Test Report: Docker_Linux_crio 20506

6319ed1cff2ab87f49806f23f2b58db8faa9bede:2025-04-01:38963

Failed tests (21/323)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 154.84
337 TestStartStop/group/old-k8s-version/serial/FirstStart 298.92
342 TestStartStop/group/no-preload/serial/FirstStart 287.65
347 TestStartStop/group/embed-certs/serial/FirstStart 274.82
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 267.94
350 TestStartStop/group/no-preload/serial/DeployApp 484.29
351 TestStartStop/group/embed-certs/serial/DeployApp 485.03
352 TestStartStop/group/old-k8s-version/serial/DeployApp 485.07
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 484.79
357 TestStartStop/group/no-preload/serial/SecondStart 250.61
365 TestStartStop/group/embed-certs/serial/SecondStart 255.84
367 TestStartStop/group/old-k8s-version/serial/SecondStart 256.19
369 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 250.46
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.39
371 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.34
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.49
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.7
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 235.68
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 254.07
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 241.45
377 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 216.91
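
Any one of these failures can usually be re-run in isolation for local triage. A minimal sketch, assuming a minikube source checkout and a prebuilt out/minikube-linux-amd64 binary (the flags below are illustrative, not the exact invocation this CI job used):

	go test ./test/integration -v -timeout 60m \
		-run 'TestAddons/parallel/Ingress' \
		--minikube-start-args='--driver=docker --container-runtime=crio'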
TestAddons/parallel/Ingress (154.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-649141 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-649141 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-649141 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a8838879-e265-4986-8dec-4752cf2d4c7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a8838879-e265-4986-8dec-4752cf2d4c7d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.002826594s
I0401 19:49:28.986006   23163 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-649141 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.725494416s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
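
The ssh exit status 28 above is most likely the remote curl's own exit code, and in curl, 28 means the operation timed out (CURLE_OPERATION_TIMEDOUT), which is consistent with the 2m10s the step spent waiting. A hedged way to probe the same ingress path by hand against this profile, mirroring the test's command but with an explicit time bound added (not part of the test) so a hang fails fast:

	out/minikube-linux-amd64 -p addons-649141 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"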
addons_test.go:286: (dbg) Run:  kubectl --context addons-649141 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-649141
helpers_test.go:235: (dbg) docker inspect addons-649141:

-- stdout --
	[
	    {
	        "Id": "663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72",
	        "Created": "2025-04-01T19:46:28.764726981Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 25195,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T19:46:28.796025349Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72/hostname",
	        "HostsPath": "/var/lib/docker/containers/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72/hosts",
	        "LogPath": "/var/lib/docker/containers/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72-json.log",
	        "Name": "/addons-649141",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-649141:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-649141",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72",
	                "LowerDir": "/var/lib/docker/overlay2/8494eff2a00b8cb0109fae06e19aa67084bb9ada260361da0029740054ff19a2-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8494eff2a00b8cb0109fae06e19aa67084bb9ada260361da0029740054ff19a2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8494eff2a00b8cb0109fae06e19aa67084bb9ada260361da0029740054ff19a2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8494eff2a00b8cb0109fae06e19aa67084bb9ada260361da0029740054ff19a2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-649141",
	                "Source": "/var/lib/docker/volumes/addons-649141/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-649141",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-649141",
	                "name.minikube.sigs.k8s.io": "addons-649141",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "da1b1111bde7f901acd554cc3130b2e55480ceeb518f07f6f86fb99f5471b470",
	            "SandboxKey": "/var/run/docker/netns/da1b1111bde7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-649141": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:b9:cc:ae:62:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "644d76732c4a0c4973acda71c2c212dc197282694f38691c2b5ce0c704035832",
	                    "EndpointID": "80ef5c498737a29e13b4993d539ed4450b494fd51d291fc3d9a13e74f1910c75",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-649141",
	                        "663e65e28bd5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
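
Rather than scanning this full JSON during post-mortem, individual fields can be pulled with a Go template; for example, the ephemeral host port mapped to the guest's SSH port, using the same template the harness itself uses later in the minikube log below:

	docker container inspect addons-649141 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'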
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-649141 -n addons-649141
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 logs -n 25: (1.104657101s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-954986 | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC |                     |
	|         | download-docker-954986                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-954986                                                                   | download-docker-954986 | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC | 01 Apr 25 19:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-842965   | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC |                     |
	|         | binary-mirror-842965                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40093                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-842965                                                                     | binary-mirror-842965   | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC | 01 Apr 25 19:46 UTC |
	| addons  | enable dashboard -p                                                                         | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC |                     |
	|         | addons-649141                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC |                     |
	|         | addons-649141                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-649141 --wait=true                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:46 UTC | 01 Apr 25 19:48 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:48 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:48 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:48 UTC |
	|         | -p addons-649141                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:48 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:48 UTC |
	|         | amd-gpu-device-plugin                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:48 UTC | 01 Apr 25 19:49 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ip      | addons-649141 ip                                                                            | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-649141 ssh cat                                                                       | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | /opt/local-path-provisioner/pvc-dcafb04a-54c9-48ba-b8f1-ef3390737a6d_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-649141 addons disable                                                                | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-649141 ssh curl -s                                                                   | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-649141 addons                                                                        | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:49 UTC | 01 Apr 25 19:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-649141 ip                                                                            | addons-649141          | jenkins | v1.35.0 | 01 Apr 25 19:51 UTC | 01 Apr 25 19:51 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:46:06
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:46:06.564112   24564 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:46:06.564216   24564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:46:06.564225   24564 out.go:358] Setting ErrFile to fd 2...
	I0401 19:46:06.564236   24564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:46:06.564418   24564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 19:46:06.564959   24564 out.go:352] Setting JSON to false
	I0401 19:46:06.565731   24564 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1713,"bootTime":1743535054,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:46:06.565832   24564 start.go:139] virtualization: kvm guest
	I0401 19:46:06.567721   24564 out.go:177] * [addons-649141] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:46:06.569126   24564 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:46:06.569159   24564 notify.go:220] Checking for updates...
	I0401 19:46:06.571307   24564 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:46:06.572494   24564 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:46:06.573609   24564 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 19:46:06.574850   24564 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:46:06.576338   24564 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:46:06.577684   24564 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:46:06.598351   24564 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 19:46:06.598433   24564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:46:06.646313   24564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-01 19:46:06.638078761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:46:06.646401   24564 docker.go:318] overlay module found
	I0401 19:46:06.648015   24564 out.go:177] * Using the docker driver based on user configuration
	I0401 19:46:06.649209   24564 start.go:297] selected driver: docker
	I0401 19:46:06.649220   24564 start.go:901] validating driver "docker" against <nil>
	I0401 19:46:06.649231   24564 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:46:06.649987   24564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:46:06.694215   24564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-01 19:46:06.686042443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:46:06.694399   24564 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:46:06.694576   24564 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:46:06.696209   24564 out.go:177] * Using Docker driver with root privileges
	I0401 19:46:06.697365   24564 cni.go:84] Creating CNI manager for ""
	I0401 19:46:06.697420   24564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 19:46:06.697429   24564 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 19:46:06.697477   24564 start.go:340] cluster config:
	{Name:addons-649141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-649141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:46:06.698864   24564 out.go:177] * Starting "addons-649141" primary control-plane node in "addons-649141" cluster
	I0401 19:46:06.700146   24564 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 19:46:06.701419   24564 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 19:46:06.702646   24564 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:46:06.702671   24564 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 19:46:06.702683   24564 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 19:46:06.702689   24564 cache.go:56] Caching tarball of preloaded images
	I0401 19:46:06.702774   24564 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 19:46:06.702790   24564 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 19:46:06.703110   24564 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/config.json ...
	I0401 19:46:06.703133   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/config.json: {Name:mkf7629c64033a4d7a443a3bdcdf23ac8c34394c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:06.719178   24564 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0401 19:46:06.719307   24564 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0401 19:46:06.719326   24564 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0401 19:46:06.719332   24564 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0401 19:46:06.719344   24564 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0401 19:46:06.719355   24564 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from local cache
	I0401 19:46:18.967029   24564 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from cached tarball
	I0401 19:46:18.967063   24564 cache.go:230] Successfully downloaded all kic artifacts
	I0401 19:46:18.967100   24564 start.go:360] acquireMachinesLock for addons-649141: {Name:mk671d04dbe0c43149ca90db9a3c513bc30d187c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 19:46:18.967200   24564 start.go:364] duration metric: took 75.513µs to acquireMachinesLock for "addons-649141"
	I0401 19:46:18.967229   24564 start.go:93] Provisioning new machine with config: &{Name:addons-649141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-649141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:46:18.967298   24564 start.go:125] createHost starting for "" (driver="docker")
	I0401 19:46:18.969050   24564 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0401 19:46:18.969283   24564 start.go:159] libmachine.API.Create for "addons-649141" (driver="docker")
	I0401 19:46:18.969315   24564 client.go:168] LocalClient.Create starting
	I0401 19:46:18.969468   24564 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 19:46:19.162158   24564 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 19:46:19.185462   24564 cli_runner.go:164] Run: docker network inspect addons-649141 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 19:46:19.201605   24564 cli_runner.go:211] docker network inspect addons-649141 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 19:46:19.201687   24564 network_create.go:284] running [docker network inspect addons-649141] to gather additional debugging logs...
	I0401 19:46:19.201708   24564 cli_runner.go:164] Run: docker network inspect addons-649141
	W0401 19:46:19.216938   24564 cli_runner.go:211] docker network inspect addons-649141 returned with exit code 1
	I0401 19:46:19.216965   24564 network_create.go:287] error running [docker network inspect addons-649141]: docker network inspect addons-649141: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-649141 not found
	I0401 19:46:19.216977   24564 network_create.go:289] output of [docker network inspect addons-649141]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-649141 not found
	
	** /stderr **
	I0401 19:46:19.217076   24564 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 19:46:19.233632   24564 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e09280}
	I0401 19:46:19.233667   24564 network_create.go:124] attempt to create docker network addons-649141 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0401 19:46:19.233716   24564 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-649141 addons-649141
	I0401 19:46:19.282995   24564 network_create.go:108] docker network addons-649141 192.168.49.0/24 created
	I0401 19:46:19.283027   24564 kic.go:121] calculated static IP "192.168.49.2" for the "addons-649141" container
	I0401 19:46:19.283078   24564 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 19:46:19.298766   24564 cli_runner.go:164] Run: docker volume create addons-649141 --label name.minikube.sigs.k8s.io=addons-649141 --label created_by.minikube.sigs.k8s.io=true
	I0401 19:46:19.315183   24564 oci.go:103] Successfully created a docker volume addons-649141
	I0401 19:46:19.315260   24564 cli_runner.go:164] Run: docker run --rm --name addons-649141-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-649141 --entrypoint /usr/bin/test -v addons-649141:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 19:46:24.248529   24564 cli_runner.go:217] Completed: docker run --rm --name addons-649141-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-649141 --entrypoint /usr/bin/test -v addons-649141:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib: (4.933187768s)
	I0401 19:46:24.248569   24564 oci.go:107] Successfully prepared a docker volume addons-649141
	I0401 19:46:24.248624   24564 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:46:24.248647   24564 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 19:46:24.248730   24564 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-649141:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 19:46:28.703364   24564 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-649141:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.454590001s)
	I0401 19:46:28.703396   24564 kic.go:203] duration metric: took 4.454744249s to extract preloaded images to volume ...
	W0401 19:46:28.703561   24564 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 19:46:28.703711   24564 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 19:46:28.749650   24564 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-649141 --name addons-649141 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-649141 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-649141 --network addons-649141 --ip 192.168.49.2 --volume addons-649141:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 19:46:29.020896   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Running}}
	I0401 19:46:29.039452   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:29.057663   24564 cli_runner.go:164] Run: docker exec addons-649141 stat /var/lib/dpkg/alternatives/iptables
	I0401 19:46:29.102350   24564 oci.go:144] the created container "addons-649141" has a running status.
	I0401 19:46:29.102384   24564 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa...
	I0401 19:46:29.183757   24564 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 19:46:29.203758   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:29.220151   24564 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 19:46:29.220177   24564 kic_runner.go:114] Args: [docker exec --privileged addons-649141 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 19:46:29.267773   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:29.284238   24564 machine.go:93] provisionDockerMachine start ...
	I0401 19:46:29.284331   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:29.303336   24564 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:29.303648   24564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0401 19:46:29.303669   24564 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 19:46:29.304543   24564 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51114->127.0.0.1:32768: read: connection reset by peer
	I0401 19:46:32.437369   24564 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649141
	
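	The failed dial at 19:46:29.304 ("connection reset by peer") is expected: sshd inside the freshly started container is not yet accepting connections, and libmachine simply retries until the hostname probe succeeds about three seconds later. The probe can be reproduced by hand (a sketch, using the key path and the 127.0.0.1:32768 mapping shown above):

	    docker port addons-649141 22   # shows the 127.0.0.1:32768 mapping
	    ssh -o StrictHostKeyChecking=no \
	        -i /home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa \
	        -p 32768 docker@127.0.0.1 hostname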
	I0401 19:46:32.437400   24564 ubuntu.go:169] provisioning hostname "addons-649141"
	I0401 19:46:32.437462   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:32.455434   24564 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:32.455632   24564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0401 19:46:32.455644   24564 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-649141 && echo "addons-649141" | sudo tee /etc/hostname
	I0401 19:46:32.595816   24564 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-649141
	
	I0401 19:46:32.595907   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:32.612563   24564 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:32.612772   24564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0401 19:46:32.612788   24564 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-649141' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-649141/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-649141' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 19:46:32.741863   24564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 19:46:32.741890   24564 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 19:46:32.741917   24564 ubuntu.go:177] setting up certificates
	I0401 19:46:32.741931   24564 provision.go:84] configureAuth start
	I0401 19:46:32.741999   24564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-649141
	I0401 19:46:32.758441   24564 provision.go:143] copyHostCerts
	I0401 19:46:32.758507   24564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 19:46:32.758627   24564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 19:46:32.758705   24564 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 19:46:32.758766   24564 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.addons-649141 san=[127.0.0.1 192.168.49.2 addons-649141 localhost minikube]
	I0401 19:46:32.843305   24564 provision.go:177] copyRemoteCerts
	I0401 19:46:32.843352   24564 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 19:46:32.843396   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:32.859754   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:32.953826   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 19:46:32.974552   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0401 19:46:32.995273   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 19:46:33.016576   24564 provision.go:87] duration metric: took 274.628234ms to configureAuth
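	configureAuth generated a server certificate whose SANs (the san=[...] list at 19:46:32.758) must cover every address the node's TLS endpoints answer on, then pushed it to /etc/docker. One way to double-check what landed there (a sketch, run through minikube ssh as elsewhere in this report):

	    out/minikube-linux-amd64 -p addons-649141 ssh \
	      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"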
	I0401 19:46:33.016601   24564 ubuntu.go:193] setting minikube options for container-runtime
	I0401 19:46:33.016774   24564 config.go:182] Loaded profile config "addons-649141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:33.016877   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:33.034619   24564 main.go:141] libmachine: Using SSH client type: native
	I0401 19:46:33.034819   24564 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0401 19:46:33.034834   24564 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 19:46:33.248853   24564 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
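	The sysconfig file written above feeds --insecure-registry for the 10.96.0.0/12 service CIDR into CRI-O through an environment file, after which the service is restarted. To confirm the unit actually sources that file (a sketch; that the kicbase crio unit references /etc/sysconfig/crio.minikube is an assumption implied by the restart above, not stated in this log):

	    out/minikube-linux-amd64 -p addons-649141 ssh \
	      "cat /etc/sysconfig/crio.minikube && systemctl cat crio | grep -i environment"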
	I0401 19:46:33.248895   24564 machine.go:96] duration metric: took 3.964629995s to provisionDockerMachine
	I0401 19:46:33.248908   24564 client.go:171] duration metric: took 14.279582028s to LocalClient.Create
	I0401 19:46:33.248929   24564 start.go:167] duration metric: took 14.279644802s to libmachine.API.Create "addons-649141"
	I0401 19:46:33.248940   24564 start.go:293] postStartSetup for "addons-649141" (driver="docker")
	I0401 19:46:33.248952   24564 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 19:46:33.249013   24564 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 19:46:33.249059   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:33.267193   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:33.366285   24564 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 19:46:33.369145   24564 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 19:46:33.369172   24564 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 19:46:33.369184   24564 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 19:46:33.369192   24564 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 19:46:33.369203   24564 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 19:46:33.369277   24564 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 19:46:33.369314   24564 start.go:296] duration metric: took 120.366923ms for postStartSetup
	I0401 19:46:33.369653   24564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-649141
	I0401 19:46:33.386296   24564 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/config.json ...
	I0401 19:46:33.386570   24564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 19:46:33.386619   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:33.405513   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:33.498314   24564 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 19:46:33.502200   24564 start.go:128] duration metric: took 14.53488574s to createHost
	I0401 19:46:33.502225   24564 start.go:83] releasing machines lock for "addons-649141", held for 14.535011258s
	I0401 19:46:33.502275   24564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-649141
	I0401 19:46:33.518319   24564 ssh_runner.go:195] Run: cat /version.json
	I0401 19:46:33.518365   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:33.518412   24564 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 19:46:33.518466   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:33.537421   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:33.537516   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:33.699459   24564 ssh_runner.go:195] Run: systemctl --version
	I0401 19:46:33.703353   24564 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 19:46:33.838467   24564 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 19:46:33.842525   24564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:46:33.859748   24564 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 19:46:33.859827   24564 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 19:46:33.884196   24564 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
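	Rather than deleting the stock CNI configs, minikube renames them with a .mk_disabled suffix (the loopback config at 19:46:33.859 and the two bridge configs listed at 19:46:33.884) so that kindnet, installed later, is the only active CNI. The result is easy to inspect (sketch):

	    out/minikube-linux-amd64 -p addons-649141 ssh "ls /etc/cni/net.d"
	    # expect: 100-crio-bridge.conf.mk_disabled, 87-podman-bridge.conflist.mk_disabled, ...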
	I0401 19:46:33.884217   24564 start.go:495] detecting cgroup driver to use...
	I0401 19:46:33.884246   24564 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 19:46:33.884288   24564 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 19:46:33.897297   24564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 19:46:33.906625   24564 docker.go:217] disabling cri-docker service (if available) ...
	I0401 19:46:33.906667   24564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 19:46:33.918067   24564 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 19:46:33.929976   24564 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 19:46:34.011532   24564 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 19:46:34.094713   24564 docker.go:233] disabling docker service ...
	I0401 19:46:34.094791   24564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 19:46:34.111649   24564 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 19:46:34.121929   24564 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 19:46:34.195308   24564 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 19:46:34.271405   24564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
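	With CRI-O as the runtime, docker and cri-docker are stopped, disabled, and masked so nothing else answers on a CRI socket. A sketch to verify the end state (expected output roughly: masked / masked / active):

	    out/minikube-linux-amd64 -p addons-649141 ssh \
	      "systemctl is-enabled docker.service cri-docker.service; systemctl is-active crio"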
	I0401 19:46:34.281342   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 19:46:34.295408   24564 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 19:46:34.295466   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.303975   24564 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 19:46:34.304026   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.312782   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.321547   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.330148   24564 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 19:46:34.338176   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.346702   24564 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 19:46:34.360400   24564 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
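	The sed/grep edits between 19:46:34.295 and 19:46:34.360 all target one drop-in file; their combined effect can be read back in one shot (a sketch; the expected lines below restate the values from the log, the exact file layout may differ):

	    out/minikube-linux-amd64 -p addons-649141 ssh \
	      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	    # expected (roughly):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",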
	I0401 19:46:34.368628   24564 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 19:46:34.375927   24564 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0401 19:46:34.375982   24564 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0401 19:46:34.387960   24564 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
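	The sysctl probe at 19:46:34.368 fails only because br_netfilter is not loaded yet; loading the module creates the /proc/sys/net/bridge tree, and minikube then enables IPv4 forwarding directly. Verified by hand (a sketch; the bridge-nf-call value typically defaults to 1 once the module loads):

	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables
	    cat /proc/sys/net/ipv4/ip_forward   # 1 after the echo above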
	I0401 19:46:34.395362   24564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:34.465954   24564 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 19:46:34.566448   24564 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 19:46:34.566508   24564 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 19:46:34.569825   24564 start.go:563] Will wait 60s for crictl version
	I0401 19:46:34.569890   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:46:34.573115   24564 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 19:46:34.603711   24564 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 19:46:34.603814   24564 ssh_runner.go:195] Run: crio --version
	I0401 19:46:34.636121   24564 ssh_runner.go:195] Run: crio --version
	I0401 19:46:34.671292   24564 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 19:46:34.672677   24564 cli_runner.go:164] Run: docker network inspect addons-649141 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 19:46:34.688974   24564 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0401 19:46:34.692379   24564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
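	The one-liner at 19:46:34.692 is a replace-then-copy pattern for /etc/hosts: strip any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back with sudo (writing the redirection directly would fail, since it runs as the unprivileged user). The same logic reflowed, with $'\t' standing in for the literal tab of the original:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      printf '192.168.49.1\thost.minikube.internal\n'
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts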
	I0401 19:46:34.702550   24564 kubeadm.go:883] updating cluster {Name:addons-649141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-649141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 19:46:34.702658   24564 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:46:34.702705   24564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:46:34.763495   24564 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:46:34.763524   24564 crio.go:433] Images already preloaded, skipping extraction
	I0401 19:46:34.763573   24564 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 19:46:34.794555   24564 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 19:46:34.794578   24564 cache_images.go:84] Images are preloaded, skipping loading
	I0401 19:46:34.794586   24564 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 crio true true} ...
	I0401 19:46:34.794658   24564 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-649141 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-649141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 19:46:34.794717   24564 ssh_runner.go:195] Run: crio config
	I0401 19:46:34.834664   24564 cni.go:84] Creating CNI manager for ""
	I0401 19:46:34.834691   24564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 19:46:34.834711   24564 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 19:46:34.834737   24564 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-649141 NodeName:addons-649141 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 19:46:34.834892   24564 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-649141"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
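	A config like the one above can be sanity-checked without mutating the node by letting kubeadm do a dry run (a sketch, run as root on the node with the same PATH override minikube itself uses below):

	    sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run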
	I0401 19:46:34.834954   24564 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 19:46:34.842902   24564 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 19:46:34.842967   24564 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 19:46:34.850370   24564 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0401 19:46:34.865270   24564 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 19:46:34.880501   24564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0401 19:46:34.895891   24564 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0401 19:46:34.898817   24564 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 19:46:34.908231   24564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:34.980049   24564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:46:34.991936   24564 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141 for IP: 192.168.49.2
	I0401 19:46:34.991963   24564 certs.go:194] generating shared ca certs ...
	I0401 19:46:34.991980   24564 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:34.992098   24564 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 19:46:35.442230   24564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt ...
	I0401 19:46:35.442263   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt: {Name:mk70ca4db767661f2580e9acfb42c52810b9d54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.442433   24564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key ...
	I0401 19:46:35.442444   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key: {Name:mkf35ce0131e2667bda2eedecc72528efff082ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.442511   24564 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 19:46:35.648949   24564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt ...
	I0401 19:46:35.648979   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt: {Name:mk1a70d8e4490d15031b8eb5707f54cf65af4596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.649132   24564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key ...
	I0401 19:46:35.649142   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key: {Name:mk641b07f028b7dd47f620d770f92a4ec8328c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.649215   24564 certs.go:256] generating profile certs ...
	I0401 19:46:35.649266   24564 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.key
	I0401 19:46:35.649286   24564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt with IP's: []
	I0401 19:46:35.960938   24564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt ...
	I0401 19:46:35.960969   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: {Name:mk61e82b3c62ae534f1e2755341e6e1afa604576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.961137   24564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.key ...
	I0401 19:46:35.961147   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.key: {Name:mkb13d5db0d8f2ac7366a67fdfe365111ce48f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:35.961217   24564 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key.b4fa6c52
	I0401 19:46:35.961235   24564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt.b4fa6c52 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0401 19:46:36.677027   24564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt.b4fa6c52 ...
	I0401 19:46:36.677054   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt.b4fa6c52: {Name:mk63dd39ae52575e045f83c84f447cbd05e419d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:36.677217   24564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key.b4fa6c52 ...
	I0401 19:46:36.677230   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key.b4fa6c52: {Name:mkada7e3ea7c443101382f405e21136b73028c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:36.677299   24564 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt.b4fa6c52 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt
	I0401 19:46:36.677385   24564 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key.b4fa6c52 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key
	I0401 19:46:36.677434   24564 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.key
	I0401 19:46:36.677451   24564 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.crt with IP's: []
	I0401 19:46:36.885132   24564 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.crt ...
	I0401 19:46:36.885162   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.crt: {Name:mk49e1c39e676dcef8c6aa53f555ed6c5057295c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:36.885325   24564 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.key ...
	I0401 19:46:36.885336   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.key: {Name:mkec548541874c1e756813d98d65870567f993a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:36.885496   24564 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 19:46:36.885527   24564 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 19:46:36.885553   24564 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 19:46:36.885575   24564 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 19:46:36.886126   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 19:46:36.907768   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 19:46:36.927965   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 19:46:36.947931   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 19:46:36.968289   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0401 19:46:36.988507   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 19:46:37.008851   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 19:46:37.029549   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 19:46:37.050463   24564 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 19:46:37.070623   24564 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 19:46:37.085577   24564 ssh_runner.go:195] Run: openssl version
	I0401 19:46:37.090311   24564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 19:46:37.098553   24564 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:37.101553   24564 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:37.101613   24564 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 19:46:37.107499   24564 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
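	The b5213941.0 link name is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by subject-name hash, and the x509 -hash call at 19:46:37.101 computes exactly that value. The two steps, reproduced by hand (sketch):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0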
	I0401 19:46:37.115651   24564 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 19:46:37.118549   24564 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 19:46:37.118592   24564 kubeadm.go:392] StartCluster: {Name:addons-649141 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-649141 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:46:37.118672   24564 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 19:46:37.118707   24564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 19:46:37.150790   24564 cri.go:89] found id: ""
	I0401 19:46:37.150875   24564 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 19:46:37.158707   24564 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 19:46:37.166275   24564 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 19:46:37.166314   24564 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 19:46:37.173572   24564 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 19:46:37.173596   24564 kubeadm.go:157] found existing configuration files:
	
	I0401 19:46:37.173629   24564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 19:46:37.180838   24564 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 19:46:37.180883   24564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 19:46:37.188034   24564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 19:46:37.195232   24564 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 19:46:37.195272   24564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 19:46:37.202313   24564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 19:46:37.209500   24564 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 19:46:37.209547   24564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 19:46:37.216661   24564 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 19:46:37.223997   24564 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 19:46:37.224040   24564 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 19:46:37.231171   24564 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 19:46:37.283883   24564 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 19:46:37.284181   24564 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 19:46:37.333763   24564 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 19:46:46.182550   24564 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 19:46:46.182641   24564 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 19:46:46.182750   24564 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 19:46:46.182823   24564 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 19:46:46.182856   24564 kubeadm.go:310] OS: Linux
	I0401 19:46:46.182921   24564 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 19:46:46.182982   24564 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 19:46:46.183050   24564 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 19:46:46.183118   24564 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 19:46:46.183191   24564 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 19:46:46.183269   24564 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 19:46:46.183355   24564 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 19:46:46.183516   24564 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 19:46:46.183584   24564 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 19:46:46.183687   24564 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 19:46:46.183809   24564 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 19:46:46.183947   24564 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 19:46:46.184051   24564 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 19:46:46.185712   24564 out.go:235]   - Generating certificates and keys ...
	I0401 19:46:46.185813   24564 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 19:46:46.185912   24564 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 19:46:46.185999   24564 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 19:46:46.186102   24564 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 19:46:46.186216   24564 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 19:46:46.186286   24564 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 19:46:46.186370   24564 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 19:46:46.186561   24564 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-649141 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0401 19:46:46.186648   24564 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 19:46:46.186817   24564 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-649141 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0401 19:46:46.186930   24564 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 19:46:46.187041   24564 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 19:46:46.187101   24564 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 19:46:46.187170   24564 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 19:46:46.187255   24564 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 19:46:46.187338   24564 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 19:46:46.187444   24564 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 19:46:46.187628   24564 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 19:46:46.187714   24564 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 19:46:46.187826   24564 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 19:46:46.187889   24564 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 19:46:46.189083   24564 out.go:235]   - Booting up control plane ...
	I0401 19:46:46.189160   24564 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 19:46:46.189231   24564 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 19:46:46.189285   24564 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 19:46:46.189427   24564 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 19:46:46.189559   24564 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 19:46:46.189617   24564 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 19:46:46.189796   24564 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 19:46:46.189944   24564 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 19:46:46.190016   24564 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000933725s
	I0401 19:46:46.190079   24564 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 19:46:46.190131   24564 kubeadm.go:310] [api-check] The API server is healthy after 4.001791628s
	I0401 19:46:46.190221   24564 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 19:46:46.190325   24564 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 19:46:46.190372   24564 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 19:46:46.190533   24564 kubeadm.go:310] [mark-control-plane] Marking the node addons-649141 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 19:46:46.190590   24564 kubeadm.go:310] [bootstrap-token] Using token: xuovv1.d7crqh038ts8qees
	I0401 19:46:46.192029   24564 out.go:235]   - Configuring RBAC rules ...
	I0401 19:46:46.192132   24564 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 19:46:46.192203   24564 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 19:46:46.192323   24564 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 19:46:46.192457   24564 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 19:46:46.192620   24564 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 19:46:46.192751   24564 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 19:46:46.192889   24564 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 19:46:46.192929   24564 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 19:46:46.192968   24564 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 19:46:46.192974   24564 kubeadm.go:310] 
	I0401 19:46:46.193063   24564 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 19:46:46.193070   24564 kubeadm.go:310] 
	I0401 19:46:46.193156   24564 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 19:46:46.193164   24564 kubeadm.go:310] 
	I0401 19:46:46.193185   24564 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 19:46:46.193237   24564 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 19:46:46.193300   24564 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 19:46:46.193310   24564 kubeadm.go:310] 
	I0401 19:46:46.193354   24564 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 19:46:46.193359   24564 kubeadm.go:310] 
	I0401 19:46:46.193398   24564 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 19:46:46.193404   24564 kubeadm.go:310] 
	I0401 19:46:46.193450   24564 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 19:46:46.193516   24564 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 19:46:46.193574   24564 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 19:46:46.193580   24564 kubeadm.go:310] 
	I0401 19:46:46.193649   24564 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 19:46:46.193716   24564 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 19:46:46.193722   24564 kubeadm.go:310] 
	I0401 19:46:46.193817   24564 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xuovv1.d7crqh038ts8qees \
	I0401 19:46:46.193947   24564 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 19:46:46.193980   24564 kubeadm.go:310] 	--control-plane 
	I0401 19:46:46.193986   24564 kubeadm.go:310] 
	I0401 19:46:46.194128   24564 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 19:46:46.194143   24564 kubeadm.go:310] 
	I0401 19:46:46.194229   24564 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xuovv1.d7crqh038ts8qees \
	I0401 19:46:46.194331   24564 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 19:46:46.194345   24564 cni.go:84] Creating CNI manager for ""
	I0401 19:46:46.194354   24564 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 19:46:46.195746   24564 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 19:46:46.196766   24564 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 19:46:46.200368   24564 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 19:46:46.200383   24564 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 19:46:46.216111   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
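	Once the kindnet manifest is applied, its rollout can be checked with the same bundled kubectl (a sketch; that the DaemonSet is named kindnet is an assumption based on the upstream kindnet manifest, not something this log states):

	    sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get daemonset kindnet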
	I0401 19:46:46.409731   24564 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 19:46:46.409825   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:46.409944   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-649141 minikube.k8s.io/updated_at=2025_04_01T19_46_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=addons-649141 minikube.k8s.io/primary=true
	I0401 19:46:46.526361   24564 ops.go:34] apiserver oom_adj: -16
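The `-16` read back from `/proc/<pid>/oom_adj` means the kube-apiserver has been deprioritized for the kernel OOM killer (lower values are less likely to be killed). `oom_adj` is the legacy interface; on current kernels the equivalent non-deprecated knob can be inspected the same way:

	# Read the modern OOM-score adjustment for the apiserver process
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj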
	I0401 19:46:46.526462   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:47.027552   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:47.527509   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:48.026826   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:48.527254   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:49.026573   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:49.526518   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:50.026675   24564 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 19:46:50.086308   24564 kubeadm.go:1113] duration metric: took 3.676561636s to wait for elevateKubeSystemPrivileges
	I0401 19:46:50.086347   24564 kubeadm.go:394] duration metric: took 12.967760024s to StartCluster
	I0401 19:46:50.086364   24564 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:50.086521   24564 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:46:50.087036   24564 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:46:50.087238   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 19:46:50.087309   24564 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 19:46:50.087337   24564 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
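The `toEnable` map above is the addon set requested for this profile by the test harness. Outside the harness, the same toggles go through the minikube CLI per addon, e.g.:

	minikube -p addons-649141 addons enable ingress
	minikube -p addons-649141 addons disable volcano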
	I0401 19:46:50.087450   24564 addons.go:69] Setting yakd=true in profile "addons-649141"
	I0401 19:46:50.087459   24564 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-649141"
	I0401 19:46:50.087479   24564 addons.go:238] Setting addon yakd=true in "addons-649141"
	I0401 19:46:50.087481   24564 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-649141"
	I0401 19:46:50.087492   24564 addons.go:69] Setting registry=true in profile "addons-649141"
	I0401 19:46:50.087508   24564 config.go:182] Loaded profile config "addons-649141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:50.087513   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.087515   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.087531   24564 addons.go:238] Setting addon registry=true in "addons-649141"
	I0401 19:46:50.087521   24564 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-649141"
	I0401 19:46:50.087546   24564 addons.go:69] Setting storage-provisioner=true in profile "addons-649141"
	I0401 19:46:50.087560   24564 addons.go:238] Setting addon storage-provisioner=true in "addons-649141"
	I0401 19:46:50.087522   24564 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-649141"
	I0401 19:46:50.087584   24564 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-649141"
	I0401 19:46:50.087590   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.087593   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.087688   24564 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-649141"
	I0401 19:46:50.087743   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.087979   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.088119   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.088238   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.088249   24564 addons.go:69] Setting ingress-dns=true in profile "addons-649141"
	I0401 19:46:50.088273   24564 addons.go:238] Setting addon ingress-dns=true in "addons-649141"
	I0401 19:46:50.088307   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.088770   24564 addons.go:69] Setting volcano=true in profile "addons-649141"
	I0401 19:46:50.088792   24564 addons.go:238] Setting addon volcano=true in "addons-649141"
	I0401 19:46:50.088824   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.089025   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.088241   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.089166   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.096588   24564 addons.go:69] Setting cloud-spanner=true in profile "addons-649141"
	I0401 19:46:50.096636   24564 addons.go:238] Setting addon cloud-spanner=true in "addons-649141"
	I0401 19:46:50.096678   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.097309   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.087451   24564 addons.go:69] Setting ingress=true in profile "addons-649141"
	I0401 19:46:50.097585   24564 addons.go:238] Setting addon ingress=true in "addons-649141"
	I0401 19:46:50.097635   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.097895   24564 addons.go:69] Setting volumesnapshots=true in profile "addons-649141"
	I0401 19:46:50.097916   24564 addons.go:238] Setting addon volumesnapshots=true in "addons-649141"
	I0401 19:46:50.097945   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.098221   24564 addons.go:69] Setting inspektor-gadget=true in profile "addons-649141"
	I0401 19:46:50.098253   24564 addons.go:238] Setting addon inspektor-gadget=true in "addons-649141"
	I0401 19:46:50.098287   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.098498   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.098797   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.098874   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.100687   24564 out.go:177] * Verifying Kubernetes components...
	I0401 19:46:50.101477   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.101811   24564 addons.go:69] Setting default-storageclass=true in profile "addons-649141"
	I0401 19:46:50.101901   24564 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-649141"
	I0401 19:46:50.101998   24564 addons.go:69] Setting gcp-auth=true in profile "addons-649141"
	I0401 19:46:50.102021   24564 mustload.go:65] Loading cluster: addons-649141
	I0401 19:46:50.102247   24564 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 19:46:50.102442   24564 config.go:182] Loaded profile config "addons-649141": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:46:50.102485   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.102739   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.117108   24564 addons.go:69] Setting metrics-server=true in profile "addons-649141"
	I0401 19:46:50.117163   24564 addons.go:238] Setting addon metrics-server=true in "addons-649141"
	I0401 19:46:50.117210   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.118929   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.120091   24564 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-649141"
	I0401 19:46:50.120119   24564 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-649141"
	I0401 19:46:50.120157   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.120862   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.121932   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.122799   24564 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0401 19:46:50.124140   24564 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0401 19:46:50.124189   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0401 19:46:50.124242   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
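The Go template in these `docker container inspect` calls extracts the host port mapped to the container's SSH port (22/tcp). When debugging by hand, the plain docker CLI exposes the same mapping directly:

	docker port addons-649141 22/tcp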
	I0401 19:46:50.129293   24564 out.go:177]   - Using image docker.io/registry:2.8.3
	I0401 19:46:50.129464   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0401 19:46:50.130553   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0401 19:46:50.130634   24564 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0401 19:46:50.131909   24564 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0401 19:46:50.131941   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0401 19:46:50.131993   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.132243   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0401 19:46:50.134167   24564 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 19:46:50.135643   24564 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0401 19:46:50.135681   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0401 19:46:50.135836   24564 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:46:50.135857   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 19:46:50.135913   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.136807   24564 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 19:46:50.136827   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0401 19:46:50.136881   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.137779   24564 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0401 19:46:50.138748   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0401 19:46:50.138990   24564 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0401 19:46:50.139007   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0401 19:46:50.139090   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.166472   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0401 19:46:50.166610   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0401 19:46:50.167070   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.167110   24564 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-649141"
	I0401 19:46:50.167146   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.167811   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.167986   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0401 19:46:50.167999   24564 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0401 19:46:50.168051   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.169008   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0401 19:46:50.170198   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.172136   24564 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0401 19:46:50.175024   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0401 19:46:50.175043   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0401 19:46:50.175141   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.176632   24564 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0401 19:46:50.178968   24564 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0401 19:46:50.178988   24564 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0401 19:46:50.186516   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	W0401 19:46:50.194193   24564 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
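The volcano failure here is a runtime-compatibility check rather than a transient error: the addon refuses to install on CRI-O. If volcano were actually needed, the profile would have to be created with a supported runtime, roughly (a hedged sketch; profile name illustrative):

	minikube start -p volcano-test --container-runtime=docker
	minikube -p volcano-test addons enable volcano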
	I0401 19:46:50.201568   24564 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0401 19:46:50.201933   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.207646   24564 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0401 19:46:50.207670   24564 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0401 19:46:50.207737   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.210179   24564 addons.go:238] Setting addon default-storageclass=true in "addons-649141"
	I0401 19:46:50.210325   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:50.211628   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:50.213030   24564 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0401 19:46:50.215288   24564 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:50.216534   24564 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:50.217845   24564 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0401 19:46:50.217953   24564 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0401 19:46:50.218094   24564 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 19:46:50.218120   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0401 19:46:50.218207   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.218913   24564 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 19:46:50.218928   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0401 19:46:50.218979   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.219111   24564 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 19:46:50.219119   24564 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 19:46:50.219153   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.220195   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.250539   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.250539   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.250948   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.251104   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.252413   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.253122   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 19:46:50.253359   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.253809   24564 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0401 19:46:50.257797   24564 out.go:177]   - Using image docker.io/busybox:stable
	I0401 19:46:50.257905   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.261791   24564 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 19:46:50.261813   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0401 19:46:50.261869   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.271767   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.271853   24564 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 19:46:50.271870   24564 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 19:46:50.271911   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:50.279280   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.284809   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.293210   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:50.434976   24564 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 19:46:50.631358   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0401 19:46:50.632284   24564 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0401 19:46:50.632315   24564 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0401 19:46:50.634569   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 19:46:50.637630   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0401 19:46:50.731693   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0401 19:46:50.735850   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0401 19:46:50.819476   24564 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 19:46:50.819569   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0401 19:46:50.825403   24564 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0401 19:46:50.825480   24564 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0401 19:46:50.829047   24564 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0401 19:46:50.829126   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0401 19:46:50.838490   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0401 19:46:50.840102   24564 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0401 19:46:50.840126   24564 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0401 19:46:50.921209   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0401 19:46:50.921311   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0401 19:46:50.922157   24564 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0401 19:46:50.922227   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0401 19:46:50.928051   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 19:46:51.033912   24564 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0401 19:46:51.033991   24564 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0401 19:46:51.040394   24564 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 19:46:51.040419   24564 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 19:46:51.126681   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0401 19:46:51.133407   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0401 19:46:51.133527   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0401 19:46:51.328191   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0401 19:46:51.328221   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0401 19:46:51.330774   24564 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0401 19:46:51.330849   24564 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0401 19:46:51.331417   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0401 19:46:51.331665   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0401 19:46:51.433821   24564 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0401 19:46:51.433854   24564 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0401 19:46:51.528175   24564 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:46:51.528274   24564 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 19:46:51.626403   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0401 19:46:51.626485   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0401 19:46:51.729364   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 19:46:51.731007   24564 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0401 19:46:51.731076   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0401 19:46:51.836643   24564 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0401 19:46:51.836750   24564 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0401 19:46:52.225157   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0401 19:46:52.230814   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0401 19:46:52.230847   24564 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0401 19:46:52.332612   24564 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.079445865s)
	I0401 19:46:52.332676   24564 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
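The sed pipeline that just completed rewrites the coredns ConfigMap so in-cluster workloads can resolve `host.minikube.internal` to the host gateway. The stanza it injects, as encoded in the command above, ends up in the Corefile as:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}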
	I0401 19:46:52.334083   24564 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.899021224s)
	I0401 19:46:52.335094   24564 node_ready.go:35] waiting up to 6m0s for node "addons-649141" to be "Ready" ...
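This node_ready poll is equivalent to waiting on the node's Ready condition. Done by hand with the same 6-minute budget, it would look roughly like:

	kubectl --context addons-649141 wait --for=condition=Ready node/addons-649141 --timeout=6m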
	I0401 19:46:52.426200   24564 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0401 19:46:52.426300   24564 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0401 19:46:52.633557   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0401 19:46:52.633636   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0401 19:46:52.834816   24564 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:52.834843   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0401 19:46:52.918844   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0401 19:46:52.918929   24564 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0401 19:46:52.942602   24564 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-649141" context rescaled to 1 replicas
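On a single-node profile, minikube trims coredns down from kubeadm's default of two replicas to one; the rescale logged above amounts to:

	kubectl --context addons-649141 -n kube-system scale deployment coredns --replicas=1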
	I0401 19:46:53.223994   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:53.230233   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0401 19:46:53.230336   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0401 19:46:53.537849   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0401 19:46:53.537877   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0401 19:46:53.818422   24564 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 19:46:53.818452   24564 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0401 19:46:53.941200   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0401 19:46:54.122734   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.491337065s)
	I0401 19:46:54.339441   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:46:54.537124   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.902515221s)
	I0401 19:46:54.537215   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.89953197s)
	I0401 19:46:54.537247   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.805470199s)
	I0401 19:46:56.147336   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.411440273s)
	I0401 19:46:56.147376   24564 addons.go:479] Verifying addon ingress=true in "addons-649141"
	I0401 19:46:56.147383   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.308794631s)
	I0401 19:46:56.147521   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.020751957s)
	I0401 19:46:56.147463   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.219329198s)
	I0401 19:46:56.147764   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.816066064s)
	I0401 19:46:56.147793   24564 addons.go:479] Verifying addon registry=true in "addons-649141"
	I0401 19:46:56.148082   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.816624805s)
	I0401 19:46:56.148161   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.418705859s)
	I0401 19:46:56.148181   24564 addons.go:479] Verifying addon metrics-server=true in "addons-649141"
	I0401 19:46:56.148242   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.922995192s)
	I0401 19:46:56.149202   24564 out.go:177] * Verifying ingress addon...
	I0401 19:46:56.150060   24564 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-649141 service yakd-dashboard -n yakd-dashboard
	
	I0401 19:46:56.150107   24564 out.go:177] * Verifying registry addon...
	I0401 19:46:56.151601   24564 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0401 19:46:56.152059   24564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0401 19:46:56.219682   24564 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 19:46:56.219710   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:56.219825   24564 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0401 19:46:56.219844   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0401 19:46:56.223428   24564 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
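This default-storageclass error is a standard optimistic-concurrency conflict: another controller updated the `local-path` StorageClass between minikube's read and its write, so the apiserver rejected the stale object. The operation being attempted is the usual default-class annotation flip, roughly:

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'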
	I0401 19:46:56.722421   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:56.740879   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.516831953s)
	W0401 19:46:56.740937   24564 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0401 19:46:56.740967   24564 retry.go:31] will retry after 343.771297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
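The failure above is the classic CRD-ordering race: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, before the apiserver has established the new type, so the REST mapping lookup fails. The retry below succeeds once the CRDs are registered; done manually, one would gate on CRD establishment first, e.g.:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml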
	I0401 19:46:56.750819   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:56.839323   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:46:57.085270   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0401 19:46:57.154652   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:57.154910   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:57.221987   24564 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0401 19:46:57.222070   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:57.245840   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:57.454584   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.513328349s)
	I0401 19:46:57.454625   24564 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-649141"
	I0401 19:46:57.456153   24564 out.go:177] * Verifying csi-hostpath-driver addon...
	I0401 19:46:57.458150   24564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0401 19:46:57.522324   24564 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 19:46:57.522347   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:57.540009   24564 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0401 19:46:57.558452   24564 addons.go:238] Setting addon gcp-auth=true in "addons-649141"
	I0401 19:46:57.558520   24564 host.go:66] Checking if "addons-649141" exists ...
	I0401 19:46:57.558853   24564 cli_runner.go:164] Run: docker container inspect addons-649141 --format={{.State.Status}}
	I0401 19:46:57.575066   24564 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0401 19:46:57.575117   24564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-649141
	I0401 19:46:57.590848   24564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/addons-649141/id_rsa Username:docker}
	I0401 19:46:57.659811   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:57.659939   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:57.961259   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:58.154895   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:58.155022   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:58.461502   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:58.655341   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:58.655448   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:58.961554   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:59.154589   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:59.154847   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:59.338427   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:46:59.461224   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:46:59.654693   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:46:59.654796   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:46:59.904541   24564 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.819224823s)
	I0401 19:46:59.904567   24564 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.329474215s)
	I0401 19:46:59.906346   24564 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0401 19:46:59.907853   24564 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0401 19:46:59.909104   24564 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0401 19:46:59.909122   24564 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0401 19:46:59.925143   24564 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0401 19:46:59.925167   24564 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0401 19:46:59.940194   24564 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 19:46:59.940214   24564 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0401 19:46:59.955529   24564 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0401 19:46:59.961416   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:00.155243   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:00.155401   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:00.273733   24564 addons.go:479] Verifying addon gcp-auth=true in "addons-649141"
	I0401 19:47:00.275824   24564 out.go:177] * Verifying gcp-auth addon...
	I0401 19:47:00.277484   24564 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0401 19:47:00.279498   24564 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0401 19:47:00.279526   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
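Each of these kapi.go pollers just watches a label selector until the matching pod reports Ready. The gcp-auth wait, for example, corresponds to something like the following (timeout value illustrative, not taken from this run):

	kubectl --context addons-649141 -n gcp-auth wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=90s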
	I0401 19:47:00.461044   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:00.654560   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:00.654737   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:00.780245   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:00.960828   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:01.154470   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:01.154684   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:01.280901   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:01.338472   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:47:01.461312   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:01.654987   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:01.655112   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:01.780807   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:01.960748   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:02.154362   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:02.154920   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:02.280426   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:02.461323   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:02.654988   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:02.655270   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:02.780680   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:02.961741   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:03.154603   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:03.155111   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:03.280446   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:03.462737   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:03.654595   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:03.654935   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:03.780295   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:03.837770   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:47:03.961074   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:04.155242   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:04.155360   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:04.280548   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:04.461390   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:04.655036   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:04.655133   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:04.780673   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:04.961513   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:05.154876   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:05.155044   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:05.280643   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:05.461769   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:05.654290   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:05.654419   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:05.780841   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:05.838201   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:47:05.960568   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:06.155475   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:06.155637   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:06.281024   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:06.460582   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:06.654495   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:06.654535   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:06.780854   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:06.960407   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:07.154921   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:07.154921   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:07.280432   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:07.461544   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:07.654944   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:07.655002   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:07.780328   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:07.961061   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:08.154580   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:08.154639   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:08.280839   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:08.338117   24564 node_ready.go:53] node "addons-649141" has status "Ready":"False"
	I0401 19:47:08.461846   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:08.654197   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:08.654394   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:08.780712   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:08.961569   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:09.226843   24564 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0401 19:47:09.226870   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:09.227002   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:09.319936   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:09.338834   24564 node_ready.go:49] node "addons-649141" has status "Ready":"True"
	I0401 19:47:09.338914   24564 node_ready.go:38] duration metric: took 17.003794161s for node "addons-649141" to be "Ready" ...
	I0401 19:47:09.338938   24564 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
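The node_ready.go entries above poll the node object until its Ready condition flips to True (about 17s here), after which the log switches to per-pod waits. A minimal sketch of that node condition check, assuming client-go's corev1 types rather than minikube's actual node_ready.go code:

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady reports whether the node's NodeReady condition is True,
    // i.e. the value logged above as has status "Ready":"True".
    func isNodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }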
	I0401 19:47:09.345496   24564 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:09.460608   24564 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0401 19:47:09.460631   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
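The kapi.go:96 entries come from a label-selector wait loop: list the pods matching an addon's selector (the "Found N Pods for label selector ..." lines above), then poll until every match has left the Pending phase. The following is a minimal, hypothetical reconstruction of that pattern with client-go; the poll interval, timeout, namespace, and kubeconfig path are illustrative assumptions, not minikube's actual kapi.go values:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsBySelector polls a namespace until every pod matching the
    // label selector has left the Pending phase, logging in the same shape
    // as the kapi.go lines above.
    func waitForPodsBySelector(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return false, err
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodPending {
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    				return false, nil // still Pending; poll again
    			}
    		}
    		return true, nil // all matching pods have been scheduled
    	})
    }

    func main() {
    	// Assumes a standard kubeconfig at the default location.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPodsBySelector(client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }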
	I0401 19:47:09.722867   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:09.722985   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:09.824185   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:09.961237   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:10.155086   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:10.155273   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:10.280762   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:10.461695   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:10.655300   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:10.655305   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:10.822412   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:10.961945   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:11.154574   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:11.154619   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:11.280390   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:11.350273   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:11.461387   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:11.655173   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:11.655174   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:11.780934   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:11.961946   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:12.154796   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:12.155021   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:12.279934   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:12.461540   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:12.654833   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:12.655008   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:12.780655   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:12.962027   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:13.154625   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:13.154671   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:13.279918   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:13.461651   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:13.655501   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:13.655502   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:13.779815   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:13.850308   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:13.961110   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:14.154938   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:14.155001   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:14.280330   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:14.461483   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:14.655364   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:14.655412   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:14.818906   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:14.962194   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:15.155560   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:15.155884   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:15.320294   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:15.460986   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:15.654704   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:15.654874   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:15.780390   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:15.850446   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:15.961371   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:16.155352   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:16.155395   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:16.281202   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:16.462859   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:16.654639   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:16.654664   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:16.780028   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:16.962319   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:17.154823   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:17.154940   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:17.280305   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:17.461247   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:17.655115   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:17.655316   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:17.781076   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:17.850997   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:17.961715   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:18.155941   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:18.156140   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:18.280677   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:18.462100   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:18.654935   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:18.655073   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:18.779891   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:18.961017   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:19.154522   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:19.154573   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:19.320805   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:19.461506   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:19.655166   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:19.655306   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:19.780123   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:19.852728   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:19.961673   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:20.154563   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:20.154796   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:20.280976   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:20.462082   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:20.654420   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:20.654559   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:20.780968   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:20.962021   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:21.154736   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:21.154765   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:21.280159   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:21.461834   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:21.654653   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:21.654720   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:21.779764   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:21.962310   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:22.155293   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:22.155449   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:22.280837   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:22.349894   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:22.461289   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:22.654703   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:22.654746   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:22.780010   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:22.961276   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:23.154544   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:23.154556   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:23.280171   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:23.461710   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:23.654257   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:23.654953   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:23.780309   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:23.961576   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:24.155394   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:24.155471   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:24.320593   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:24.350526   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:24.461438   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:24.655641   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:24.655683   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:24.781087   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:24.961206   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:25.154876   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:25.154903   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:25.280276   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:25.462238   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:25.655431   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:25.655528   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:25.780624   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:25.961846   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:26.155132   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:26.155171   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:26.280818   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:26.462141   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:26.655037   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:26.655100   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:26.780979   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:26.851913   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:26.961769   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:27.154328   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:27.154964   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:27.280528   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:27.462460   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:27.655299   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:27.655318   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:27.780703   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:27.962388   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:28.155410   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:28.155435   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:28.280935   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:28.461347   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:28.654877   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:28.655051   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:28.780153   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:28.961321   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:29.154945   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:29.154958   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:29.280092   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:29.349947   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:29.461646   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:29.655085   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:29.655291   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:29.780385   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:29.961894   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:30.155063   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:30.155164   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:30.280325   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:30.462422   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:30.655713   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:30.655773   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:30.780152   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:30.962212   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:31.155999   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:31.156158   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:31.280581   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:31.350690   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:31.461790   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:31.654652   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:31.654730   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:31.780052   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:31.961653   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:32.155283   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:32.155344   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:32.281006   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:32.462108   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:32.655107   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:32.655235   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:32.780536   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:32.961591   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:33.155834   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:33.156051   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:33.279978   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:33.351726   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:33.461684   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:33.654954   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:33.654970   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:33.819434   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:33.962045   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:34.154887   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:34.154997   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:34.280822   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:34.461711   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:34.654973   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:34.655069   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:34.780808   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:34.961989   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:35.154305   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:35.154518   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:35.280541   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:35.462532   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:35.655075   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:35.655141   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:35.780415   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:35.851745   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:35.961940   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:36.154578   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:36.156051   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:36.280795   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:36.461461   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:36.655290   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:36.655329   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:36.781035   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:36.961534   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:37.154992   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:37.155004   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:37.280603   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:37.460889   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:37.654365   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:37.654392   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:37.780502   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:37.961072   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:38.154409   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:38.154532   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:38.281122   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:38.350157   24564 pod_ready.go:103] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:38.461453   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:38.654904   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:38.655081   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:38.780412   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:38.961735   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:39.154113   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:39.154863   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:39.280335   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:39.460776   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:39.654214   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:39.654992   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:39.780505   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:39.961494   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:40.155003   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:40.155079   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:40.280819   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:40.349443   24564 pod_ready.go:93] pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.349467   24564 pod_ready.go:82] duration metric: took 31.003941428s for pod "amd-gpu-device-plugin-t788h" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.349477   24564 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8jzlj" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.353022   24564 pod_ready.go:93] pod "coredns-668d6bf9bc-8jzlj" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.353050   24564 pod_ready.go:82] duration metric: took 3.566052ms for pod "coredns-668d6bf9bc-8jzlj" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.353076   24564 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.358070   24564 pod_ready.go:93] pod "etcd-addons-649141" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.358088   24564 pod_ready.go:82] duration metric: took 5.002422ms for pod "etcd-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.358099   24564 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.361020   24564 pod_ready.go:93] pod "kube-apiserver-addons-649141" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.361037   24564 pod_ready.go:82] duration metric: took 2.931649ms for pod "kube-apiserver-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.361048   24564 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.364000   24564 pod_ready.go:93] pod "kube-controller-manager-addons-649141" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.364018   24564 pod_ready.go:82] duration metric: took 2.963614ms for pod "kube-controller-manager-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.364034   24564 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dm42l" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.462062   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:40.654925   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:40.655060   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:40.748377   24564 pod_ready.go:93] pod "kube-proxy-dm42l" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:40.748407   24564 pod_ready.go:82] duration metric: took 384.365293ms for pod "kube-proxy-dm42l" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.748422   24564 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:40.780820   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:40.961058   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:41.148424   24564 pod_ready.go:93] pod "kube-scheduler-addons-649141" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:41.148447   24564 pod_ready.go:82] duration metric: took 400.016041ms for pod "kube-scheduler-addons-649141" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:41.148456   24564 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-x9wfw" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:41.154665   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:41.154719   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:41.280071   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:41.461996   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:41.549271   24564 pod_ready.go:93] pod "metrics-server-7fbb699795-x9wfw" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:41.549300   24564 pod_ready.go:82] duration metric: took 400.837011ms for pod "metrics-server-7fbb699795-x9wfw" in "kube-system" namespace to be "Ready" ...
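Interleaved with the kapi.go polling, the pod_ready.go lines wait on each system-critical pod individually and log the elapsed time as a duration metric (31.003941428s for amd-gpu-device-plugin-t788h above, but only a few milliseconds for etcd, kube-apiserver, and the other control-plane pods). A function-level sketch of that check and metric, reusing the imports and client bootstrap from the earlier sketch; the 2s poll interval and the helper names are assumptions, not minikube's actual pod_ready.go code:

    // isPodReady reports whether the pod's PodReady condition is True,
    // the value logged above as has status "Ready":"True"/"False".
    func isPodReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForPodReady blocks until the named pod is Ready, then logs a
    // duration metric in the spirit of the pod_ready.go:82 lines above.
    func waitForPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		p, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // e.g. NotFound: treat as not ready yet
    		}
    		return isPodReady(p), nil
    	})
    	if err == nil {
    		fmt.Printf("duration metric: took %s for pod %q in %q namespace to be \"Ready\"\n", time.Since(start), name, ns)
    	}
    	return err
    }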
	I0401 19:47:41.549313   24564 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:41.655180   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:41.655278   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:41.780641   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:41.961349   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:42.154498   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:42.154532   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:42.281077   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:42.462177   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:42.654539   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:42.654669   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:42.780418   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:42.961812   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:43.154659   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:43.154704   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:43.280182   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:43.461040   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:43.554054   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:43.654662   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:43.654711   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:43.779959   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:43.962068   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:44.155055   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:44.155092   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:44.280894   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:44.461226   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:44.654676   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:44.654719   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:44.779946   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:44.962016   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:45.154580   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:45.154631   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:45.279876   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:45.461644   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:45.654326   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:45.654448   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:45.780839   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:45.961590   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:46.053558   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:46.155150   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:46.155255   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:46.280971   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:46.462810   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:46.654581   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:46.654873   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:46.780648   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:46.961371   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:47.155462   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:47.155558   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:47.279864   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:47.461433   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:47.654979   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:47.655046   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:47.780403   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:47.960949   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:48.054736   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:48.154662   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:48.154975   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:48.280955   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:48.462029   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:48.654540   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:48.654920   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:48.780439   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:48.961356   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:49.222629   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:49.223096   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:49.320405   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:49.520909   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:49.720967   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:49.721026   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:49.820811   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:50.021193   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:50.123545   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:50.222430   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:50.222794   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:50.329879   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:50.520962   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:50.721877   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:50.721873   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:50.820203   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:50.960995   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:51.154191   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:51.154943   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:51.320042   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:51.462015   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:51.655528   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:51.655573   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:51.780272   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:51.961378   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:52.154629   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:52.154679   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:52.280726   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:52.461893   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:52.554413   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:52.655490   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:52.655490   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:52.780144   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:52.961155   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:53.155668   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:53.155760   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:53.320258   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:53.460816   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:53.654625   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:53.654718   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:53.780250   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:53.961368   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:54.154876   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:54.154891   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:54.280793   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:54.461726   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:54.655400   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:54.655425   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:54.780856   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:54.961857   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:55.054038   24564 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"False"
	I0401 19:47:55.154714   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:55.154739   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:55.280339   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:55.461306   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:55.654706   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:55.654736   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:55.780210   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:55.961165   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:56.154628   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:56.154852   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:56.280844   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:56.461892   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:56.655358   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:56.655402   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:56.780449   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:56.961005   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:57.054242   24564 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace has status "Ready":"True"
	I0401 19:47:57.054272   24564 pod_ready.go:82] duration metric: took 15.5049503s for pod "nvidia-device-plugin-daemonset-cfwld" in "kube-system" namespace to be "Ready" ...
	I0401 19:47:57.054299   24564 pod_ready.go:39] duration metric: took 47.715333089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0401 19:47:57.054334   24564 api_server.go:52] waiting for apiserver process to appear ...
	I0401 19:47:57.054370   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:47:57.054434   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:47:57.089284   24564 cri.go:89] found id: "ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:47:57.089308   24564 cri.go:89] found id: ""
	I0401 19:47:57.089317   24564 logs.go:282] 1 containers: [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299]
	I0401 19:47:57.089366   24564 ssh_runner.go:195] Run: which crictl
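The block above is the first of several identical discovery passes: for each control-plane component, minikube asks the CRI for matching container IDs with crictl and records the single ID it finds. A minimal stand-alone sketch of the same pass, assuming crictl is installed and pointed at the node's CRI-O socket (the component names are the ones queried in this log):

    # Enumerate control-plane container IDs the way the log above does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        echo "$name: ${ids:-<none>}"
    done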
	I0401 19:47:57.092902   24564 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:47:57.092958   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:47:57.125565   24564 cri.go:89] found id: "b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:47:57.125585   24564 cri.go:89] found id: ""
	I0401 19:47:57.125592   24564 logs.go:282] 1 containers: [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef]
	I0401 19:47:57.125631   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.129162   24564 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:47:57.129220   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:47:57.155202   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:57.155383   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:57.163746   24564 cri.go:89] found id: "4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:47:57.163765   24564 cri.go:89] found id: ""
	I0401 19:47:57.163775   24564 logs.go:282] 1 containers: [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3]
	I0401 19:47:57.163821   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.167207   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:47:57.167269   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:47:57.199108   24564 cri.go:89] found id: "3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:47:57.199130   24564 cri.go:89] found id: ""
	I0401 19:47:57.199140   24564 logs.go:282] 1 containers: [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07]
	I0401 19:47:57.199217   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.202569   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:47:57.202638   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:47:57.236370   24564 cri.go:89] found id: "8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:47:57.236394   24564 cri.go:89] found id: ""
	I0401 19:47:57.236404   24564 logs.go:282] 1 containers: [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db]
	I0401 19:47:57.236452   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.239864   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:47:57.239930   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:47:57.272913   24564 cri.go:89] found id: "66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:47:57.272937   24564 cri.go:89] found id: ""
	I0401 19:47:57.272946   24564 logs.go:282] 1 containers: [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca]
	I0401 19:47:57.272987   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.276807   24564 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:47:57.276870   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:47:57.280077   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:57.309833   24564 cri.go:89] found id: "867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:47:57.309876   24564 cri.go:89] found id: ""
	I0401 19:47:57.309886   24564 logs.go:282] 1 containers: [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da]
	I0401 19:47:57.309930   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:47:57.313084   24564 logs.go:123] Gathering logs for kubelet ...
	I0401 19:47:57.313105   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:47:57.404277   24564 logs.go:123] Gathering logs for dmesg ...
	I0401 19:47:57.404315   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:47:57.416344   24564 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:47:57.416374   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:47:57.461532   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:57.549467   24564 logs.go:123] Gathering logs for kube-apiserver [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299] ...
	I0401 19:47:57.549510   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:47:57.636432   24564 logs.go:123] Gathering logs for coredns [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3] ...
	I0401 19:47:57.636468   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:47:57.655052   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:57.655193   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:57.687449   24564 logs.go:123] Gathering logs for kube-controller-manager [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca] ...
	I0401 19:47:57.687481   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:47:57.781236   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:57.782542   24564 logs.go:123] Gathering logs for kindnet [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da] ...
	I0401 19:47:57.782574   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:47:57.843076   24564 logs.go:123] Gathering logs for etcd [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef] ...
	I0401 19:47:57.843112   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:47:57.894908   24564 logs.go:123] Gathering logs for kube-scheduler [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07] ...
	I0401 19:47:57.894944   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:47:57.961502   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:57.962849   24564 logs.go:123] Gathering logs for kube-proxy [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db] ...
	I0401 19:47:57.962876   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:47:58.031638   24564 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:47:58.031677   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:47:58.111185   24564 logs.go:123] Gathering logs for container status ...
	I0401 19:47:58.111222   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
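Each "Gathering logs for ..." entry in this pass corresponds to a shell command visible verbatim in the adjacent Run: lines. Reassembled in one place, the full pass looks like the following (paths exactly as logged; <container-id> stands in for the per-component IDs found above; run on the minikube node):

    # The log-gathering pass, reassembled from the Run: lines above.
    sudo journalctl -u kubelet -n 400                                          # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
    sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig                              # describe nodes
    sudo /usr/bin/crictl logs --tail 400 <container-id>                        # one per component container
    sudo journalctl -u crio -n 400                                             # CRI-O
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status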
	I0401 19:47:58.154959   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:58.155073   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:58.281276   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:58.461697   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:58.654723   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:58.654868   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:58.780252   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:58.961097   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:59.154974   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:59.155005   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:59.280392   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:59.461314   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:47:59.654930   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:47:59.654951   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:47:59.780257   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:47:59.961125   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:00.154627   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:00.154828   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:00.280142   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:00.461269   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:00.654891   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:00.655052   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:00.660968   24564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:48:00.673228   24564 api_server.go:72] duration metric: took 1m10.585880816s to wait for apiserver process to appear ...
	I0401 19:48:00.673256   24564 api_server.go:88] waiting for apiserver healthz status ...
	I0401 19:48:00.673298   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:48:00.673353   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:48:00.707357   24564 cri.go:89] found id: "ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:48:00.707378   24564 cri.go:89] found id: ""
	I0401 19:48:00.707384   24564 logs.go:282] 1 containers: [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299]
	I0401 19:48:00.707435   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.711009   24564 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:48:00.711072   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:48:00.743678   24564 cri.go:89] found id: "b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:48:00.743696   24564 cri.go:89] found id: ""
	I0401 19:48:00.743702   24564 logs.go:282] 1 containers: [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef]
	I0401 19:48:00.743739   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.747045   24564 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:48:00.747104   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:48:00.778393   24564 cri.go:89] found id: "4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:48:00.778412   24564 cri.go:89] found id: ""
	I0401 19:48:00.778419   24564 logs.go:282] 1 containers: [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3]
	I0401 19:48:00.778456   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.780796   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:00.781958   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:48:00.782021   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:48:00.814636   24564 cri.go:89] found id: "3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:48:00.814666   24564 cri.go:89] found id: ""
	I0401 19:48:00.814675   24564 logs.go:282] 1 containers: [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07]
	I0401 19:48:00.814732   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.817999   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:48:00.818056   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:48:00.850174   24564 cri.go:89] found id: "8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:48:00.850196   24564 cri.go:89] found id: ""
	I0401 19:48:00.850203   24564 logs.go:282] 1 containers: [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db]
	I0401 19:48:00.850265   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.853516   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:48:00.853575   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:48:00.886849   24564 cri.go:89] found id: "66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:48:00.886875   24564 cri.go:89] found id: ""
	I0401 19:48:00.886883   24564 logs.go:282] 1 containers: [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca]
	I0401 19:48:00.886938   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.890211   24564 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:48:00.890286   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:48:00.923782   24564 cri.go:89] found id: "867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:48:00.923803   24564 cri.go:89] found id: ""
	I0401 19:48:00.923810   24564 logs.go:282] 1 containers: [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da]
	I0401 19:48:00.923854   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:00.927012   24564 logs.go:123] Gathering logs for dmesg ...
	I0401 19:48:00.927039   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:48:00.938216   24564 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:48:00.938240   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:48:00.961979   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:01.024512   24564 logs.go:123] Gathering logs for kube-apiserver [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299] ...
	I0401 19:48:01.024540   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:48:01.068695   24564 logs.go:123] Gathering logs for coredns [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3] ...
	I0401 19:48:01.068728   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:48:01.119906   24564 logs.go:123] Gathering logs for kube-scheduler [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07] ...
	I0401 19:48:01.119935   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:48:01.154921   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:01.154937   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:01.161098   24564 logs.go:123] Gathering logs for kube-proxy [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db] ...
	I0401 19:48:01.161120   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:48:01.192449   24564 logs.go:123] Gathering logs for kube-controller-manager [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca] ...
	I0401 19:48:01.192473   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:48:01.247407   24564 logs.go:123] Gathering logs for container status ...
	I0401 19:48:01.247448   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:48:01.281275   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:01.289495   24564 logs.go:123] Gathering logs for kubelet ...
	I0401 19:48:01.289526   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:48:01.395519   24564 logs.go:123] Gathering logs for etcd [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef] ...
	I0401 19:48:01.395550   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:48:01.521165   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:01.547148   24564 logs.go:123] Gathering logs for kindnet [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da] ...
	I0401 19:48:01.547187   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:48:01.632579   24564 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:48:01.632604   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:48:01.654786   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:01.654842   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:01.780409   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:01.961657   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:02.154564   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:02.154942   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:02.280094   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:02.461002   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:02.654598   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:02.654699   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:02.779936   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:03.087541   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:03.155200   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:03.155393   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:03.280350   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:03.460926   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:03.654463   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:03.654515   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:03.781021   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:03.961571   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:04.155106   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:04.155168   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:04.211009   24564 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0401 19:48:04.214632   24564 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0401 19:48:04.215528   24564 api_server.go:141] control plane version: v1.32.2
	I0401 19:48:04.215550   24564 api_server.go:131] duration metric: took 3.542287295s to wait for apiserver health ...
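The healthz gate that completes here is a plain HTTPS GET: minikube polls https://192.168.49.2:8443/healthz until the apiserver answers 200 with the body "ok", which took about 3.5s in this run. An equivalent manual probe, assuming you accept the cluster's self-signed serving certificate (-k skips verification; for a stricter check, pass the CA from the kubeconfig instead):

    # Probe the apiserver health endpoint seen in the log (illustrative).
    curl -k https://192.168.49.2:8443/healthz
    # expected on success: ok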
	I0401 19:48:04.215559   24564 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 19:48:04.215578   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0401 19:48:04.215636   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0401 19:48:04.247176   24564 cri.go:89] found id: "ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:48:04.247194   24564 cri.go:89] found id: ""
	I0401 19:48:04.247201   24564 logs.go:282] 1 containers: [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299]
	I0401 19:48:04.247250   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.250472   24564 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0401 19:48:04.250532   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0401 19:48:04.280995   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:04.282761   24564 cri.go:89] found id: "b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:48:04.282777   24564 cri.go:89] found id: ""
	I0401 19:48:04.282786   24564 logs.go:282] 1 containers: [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef]
	I0401 19:48:04.282836   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.286392   24564 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0401 19:48:04.286456   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0401 19:48:04.318407   24564 cri.go:89] found id: "4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:48:04.318429   24564 cri.go:89] found id: ""
	I0401 19:48:04.318439   24564 logs.go:282] 1 containers: [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3]
	I0401 19:48:04.318489   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.321564   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0401 19:48:04.321612   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0401 19:48:04.354149   24564 cri.go:89] found id: "3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:48:04.354167   24564 cri.go:89] found id: ""
	I0401 19:48:04.354174   24564 logs.go:282] 1 containers: [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07]
	I0401 19:48:04.354221   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.357821   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0401 19:48:04.357880   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0401 19:48:04.388770   24564 cri.go:89] found id: "8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:48:04.388790   24564 cri.go:89] found id: ""
	I0401 19:48:04.388797   24564 logs.go:282] 1 containers: [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db]
	I0401 19:48:04.388842   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.392442   24564 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0401 19:48:04.392502   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0401 19:48:04.424626   24564 cri.go:89] found id: "66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:48:04.424651   24564 cri.go:89] found id: ""
	I0401 19:48:04.424658   24564 logs.go:282] 1 containers: [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca]
	I0401 19:48:04.424703   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.427868   24564 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0401 19:48:04.427932   24564 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0401 19:48:04.459432   24564 cri.go:89] found id: "867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:48:04.459452   24564 cri.go:89] found id: ""
	I0401 19:48:04.459459   24564 logs.go:282] 1 containers: [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da]
	I0401 19:48:04.459510   24564 ssh_runner.go:195] Run: which crictl
	I0401 19:48:04.461893   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:04.463122   24564 logs.go:123] Gathering logs for coredns [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3] ...
	I0401 19:48:04.463151   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3"
	I0401 19:48:04.514190   24564 logs.go:123] Gathering logs for kube-proxy [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db] ...
	I0401 19:48:04.514230   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db"
	I0401 19:48:04.547390   24564 logs.go:123] Gathering logs for CRI-O ...
	I0401 19:48:04.547416   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0401 19:48:04.626001   24564 logs.go:123] Gathering logs for container status ...
	I0401 19:48:04.626042   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0401 19:48:04.655244   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:04.655494   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:04.668809   24564 logs.go:123] Gathering logs for dmesg ...
	I0401 19:48:04.668839   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0401 19:48:04.680557   24564 logs.go:123] Gathering logs for describe nodes ...
	I0401 19:48:04.680586   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0401 19:48:04.820183   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:04.847580   24564 logs.go:123] Gathering logs for kube-apiserver [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299] ...
	I0401 19:48:04.847617   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299"
	I0401 19:48:04.964208   24564 logs.go:123] Gathering logs for etcd [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef] ...
	I0401 19:48:04.964239   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef"
	I0401 19:48:05.019963   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:05.071271   24564 logs.go:123] Gathering logs for kube-scheduler [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07] ...
	I0401 19:48:05.071318   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07"
	I0401 19:48:05.154819   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:05.154841   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:05.162397   24564 logs.go:123] Gathering logs for kube-controller-manager [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca] ...
	I0401 19:48:05.162424   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca"
	I0401 19:48:05.262504   24564 logs.go:123] Gathering logs for kindnet [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da] ...
	I0401 19:48:05.262546   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da"
	I0401 19:48:05.280471   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:05.334455   24564 logs.go:123] Gathering logs for kubelet ...
	I0401 19:48:05.334485   24564 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0401 19:48:05.461892   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:05.655206   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0401 19:48:05.655276   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:05.780556   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:05.961255   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:06.155111   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:06.155145   24564 kapi.go:107] duration metric: took 1m10.003084116s to wait for kubernetes.io/minikube-addons=registry ...
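The kapi.go poll that finishes here is label-selector pod readiness, the same condition kubectl can express directly. A rough equivalent of the registry wait, assuming the addon pods live in kube-system (consistent with the registry-* pods in the system_pods listing below) and an illustrative timeout:

    # Approximate kubectl equivalent of the kapi.go registry wait (illustrative).
    kubectl --context addons-649141 wait --for=condition=ready pod \
        --selector=kubernetes.io/minikube-addons=registry -n kube-system --timeout=90s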
	I0401 19:48:06.280262   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:06.461370   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:06.655104   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:06.780341   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:06.960975   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:07.155222   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:07.320657   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:07.461352   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:07.655179   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:07.780690   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:07.938079   24564 system_pods.go:59] 19 kube-system pods found
	I0401 19:48:07.938115   24564 system_pods.go:61] "amd-gpu-device-plugin-t788h" [9045ad72-ef2d-4089-86df-edad2333b849] Running
	I0401 19:48:07.938121   24564 system_pods.go:61] "coredns-668d6bf9bc-8jzlj" [d55354bb-992f-43c9-83ee-7d23e99b5325] Running
	I0401 19:48:07.938125   24564 system_pods.go:61] "csi-hostpath-attacher-0" [dcc8708d-0856-43c1-94ba-311112735821] Running
	I0401 19:48:07.938129   24564 system_pods.go:61] "csi-hostpath-resizer-0" [06ae5514-6975-4b51-83e2-3fc72715011c] Running
	I0401 19:48:07.938138   24564 system_pods.go:61] "csi-hostpathplugin-967h9" [572b057a-75cf-417a-9ccb-a13d62050118] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 19:48:07.938147   24564 system_pods.go:61] "etcd-addons-649141" [042cece7-6338-41e3-8a86-d1c859f26ca5] Running
	I0401 19:48:07.938156   24564 system_pods.go:61] "kindnet-6hg88" [44d4c45c-749c-4f05-9718-e7974adde029] Running
	I0401 19:48:07.938163   24564 system_pods.go:61] "kube-apiserver-addons-649141" [5916ce17-6a28-4a34-98e2-345078167ca0] Running
	I0401 19:48:07.938171   24564 system_pods.go:61] "kube-controller-manager-addons-649141" [af067ec4-bbb1-42dc-bde2-51a5b8b27492] Running
	I0401 19:48:07.938180   24564 system_pods.go:61] "kube-ingress-dns-minikube" [9511b5d3-1c2b-4635-acee-fede4b45c9bf] Running
	I0401 19:48:07.938186   24564 system_pods.go:61] "kube-proxy-dm42l" [c78563fb-2b14-49f7-ba1f-af9ea8318f5b] Running
	I0401 19:48:07.938197   24564 system_pods.go:61] "kube-scheduler-addons-649141" [471ae17e-a266-4a5b-99c4-432a15bfbd6f] Running
	I0401 19:48:07.938204   24564 system_pods.go:61] "metrics-server-7fbb699795-x9wfw" [95c30889-c302-41c9-b665-1e72c47e69a3] Running
	I0401 19:48:07.938213   24564 system_pods.go:61] "nvidia-device-plugin-daemonset-cfwld" [4e2876a7-2b87-487b-8684-2742384fe6c7] Running
	I0401 19:48:07.938217   24564 system_pods.go:61] "registry-6c88467877-f5t9p" [4e5173fa-fe3e-4c68-80ae-b807fa653edd] Running
	I0401 19:48:07.938220   24564 system_pods.go:61] "registry-proxy-bpvpg" [f43456fc-f979-4521-9554-0daabc37e1a9] Running
	I0401 19:48:07.938224   24564 system_pods.go:61] "snapshot-controller-68b874b76f-kpnhw" [1c31f450-758f-4fc7-b11d-dcefac0271dc] Running
	I0401 19:48:07.938227   24564 system_pods.go:61] "snapshot-controller-68b874b76f-nxbdz" [4436ad77-fde6-43d4-a572-2a8338542c73] Running
	I0401 19:48:07.938233   24564 system_pods.go:61] "storage-provisioner" [480c344d-5b24-4227-9654-911b764e7378] Running
	I0401 19:48:07.938240   24564 system_pods.go:74] duration metric: took 3.72267529s to wait for pod list to return data ...
	I0401 19:48:07.938256   24564 default_sa.go:34] waiting for default service account to be created ...
	I0401 19:48:07.940369   24564 default_sa.go:45] found service account: "default"
	I0401 19:48:07.940392   24564 default_sa.go:55] duration metric: took 2.124233ms for default service account to be created ...
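The default_sa gate simply confirms the "default" ServiceAccount exists before declaring k8s-apps ready. A manual equivalent (namespace assumed to be default, where this account is created by the controller manager):

    # Check for the service account the log reports as found (illustrative).
    kubectl --context addons-649141 get serviceaccount default -n default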
	I0401 19:48:07.940402   24564 system_pods.go:116] waiting for k8s-apps to be running ...
	I0401 19:48:07.943425   24564 system_pods.go:86] 19 kube-system pods found
	I0401 19:48:07.943454   24564 system_pods.go:89] "amd-gpu-device-plugin-t788h" [9045ad72-ef2d-4089-86df-edad2333b849] Running
	I0401 19:48:07.943462   24564 system_pods.go:89] "coredns-668d6bf9bc-8jzlj" [d55354bb-992f-43c9-83ee-7d23e99b5325] Running
	I0401 19:48:07.943468   24564 system_pods.go:89] "csi-hostpath-attacher-0" [dcc8708d-0856-43c1-94ba-311112735821] Running
	I0401 19:48:07.943473   24564 system_pods.go:89] "csi-hostpath-resizer-0" [06ae5514-6975-4b51-83e2-3fc72715011c] Running
	I0401 19:48:07.943483   24564 system_pods.go:89] "csi-hostpathplugin-967h9" [572b057a-75cf-417a-9ccb-a13d62050118] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0401 19:48:07.943495   24564 system_pods.go:89] "etcd-addons-649141" [042cece7-6338-41e3-8a86-d1c859f26ca5] Running
	I0401 19:48:07.943502   24564 system_pods.go:89] "kindnet-6hg88" [44d4c45c-749c-4f05-9718-e7974adde029] Running
	I0401 19:48:07.943507   24564 system_pods.go:89] "kube-apiserver-addons-649141" [5916ce17-6a28-4a34-98e2-345078167ca0] Running
	I0401 19:48:07.943517   24564 system_pods.go:89] "kube-controller-manager-addons-649141" [af067ec4-bbb1-42dc-bde2-51a5b8b27492] Running
	I0401 19:48:07.943527   24564 system_pods.go:89] "kube-ingress-dns-minikube" [9511b5d3-1c2b-4635-acee-fede4b45c9bf] Running
	I0401 19:48:07.943535   24564 system_pods.go:89] "kube-proxy-dm42l" [c78563fb-2b14-49f7-ba1f-af9ea8318f5b] Running
	I0401 19:48:07.943543   24564 system_pods.go:89] "kube-scheduler-addons-649141" [471ae17e-a266-4a5b-99c4-432a15bfbd6f] Running
	I0401 19:48:07.943551   24564 system_pods.go:89] "metrics-server-7fbb699795-x9wfw" [95c30889-c302-41c9-b665-1e72c47e69a3] Running
	I0401 19:48:07.943560   24564 system_pods.go:89] "nvidia-device-plugin-daemonset-cfwld" [4e2876a7-2b87-487b-8684-2742384fe6c7] Running
	I0401 19:48:07.943568   24564 system_pods.go:89] "registry-6c88467877-f5t9p" [4e5173fa-fe3e-4c68-80ae-b807fa653edd] Running
	I0401 19:48:07.943574   24564 system_pods.go:89] "registry-proxy-bpvpg" [f43456fc-f979-4521-9554-0daabc37e1a9] Running
	I0401 19:48:07.943582   24564 system_pods.go:89] "snapshot-controller-68b874b76f-kpnhw" [1c31f450-758f-4fc7-b11d-dcefac0271dc] Running
	I0401 19:48:07.943587   24564 system_pods.go:89] "snapshot-controller-68b874b76f-nxbdz" [4436ad77-fde6-43d4-a572-2a8338542c73] Running
	I0401 19:48:07.943595   24564 system_pods.go:89] "storage-provisioner" [480c344d-5b24-4227-9654-911b764e7378] Running
	I0401 19:48:07.943611   24564 system_pods.go:126] duration metric: took 3.202683ms to wait for k8s-apps to be running ...
	I0401 19:48:07.943624   24564 system_svc.go:44] waiting for kubelet service to be running ....
	I0401 19:48:07.943692   24564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:48:07.955106   24564 system_svc.go:56] duration metric: took 11.472945ms WaitForService to wait for kubelet
	I0401 19:48:07.955137   24564 kubeadm.go:582] duration metric: took 1m17.867796158s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 19:48:07.955157   24564 node_conditions.go:102] verifying NodePressure condition ...
	I0401 19:48:07.957670   24564 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0401 19:48:07.957695   24564 node_conditions.go:123] node cpu capacity is 8
	I0401 19:48:07.957708   24564 node_conditions.go:105] duration metric: took 2.546274ms to run NodePressure ...
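The NodePressure step reads the node's reported capacity from the API: 304681132Ki of ephemeral storage and 8 CPUs in this run. The same figures can be pulled directly, as a hedged example:

    # Read the node capacity the NodePressure check reports (illustrative).
    kubectl --context addons-649141 get nodes -o jsonpath='{.items[0].status.capacity}'
    # e.g. includes "cpu":"8" and "ephemeral-storage":"304681132Ki"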
	I0401 19:48:07.957720   24564 start.go:241] waiting for startup goroutines ...
	I0401 19:48:07.961077   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:08.154822   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:08.280459   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:08.461665   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:08.655080   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:08.780774   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:08.961611   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:09.155138   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:09.280970   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:09.463598   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:09.655193   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:09.780982   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:09.962048   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:10.154633   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:10.280081   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:10.460989   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:10.654449   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:10.781058   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:10.961994   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:11.154744   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:11.280462   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:11.461088   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:11.654686   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:11.779865   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:11.961400   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:12.155089   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:12.280510   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:12.461700   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:12.654152   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:12.780657   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:12.961910   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:13.154627   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:13.279833   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:13.461635   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:13.655901   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:13.781143   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0401 19:48:14.022669   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:14.229717   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:14.331337   24564 kapi.go:107] duration metric: took 1m14.053848953s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0401 19:48:14.332957   24564 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-649141 cluster.
	I0401 19:48:14.334335   24564 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0401 19:48:14.335531   24564 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
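The gcp-auth message above names an opt-out label key but shows no manifest. A minimal sketch of where the key goes, with a hypothetical pod name and placeholder image; the addon message specifies only the key, so the "true" value here is an assumption:

    # Opt a pod out of GCP credential mounting via the label named above (illustrative).
    kubectl --context addons-649141 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-auth-example        # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"   # key from the addon message; value assumed
    spec:
      containers:
      - name: app
        image: nginx                   # placeholder image
    EOF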
	I0401 19:48:14.521795   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:14.720031   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:15.020558   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:15.222493   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:15.520736   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:15.720042   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:15.962268   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:16.155150   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:16.461348   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:16.655010   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:16.961608   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:17.155081   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:17.461808   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:17.654435   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:17.961803   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:18.154368   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:18.461572   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:18.655157   24564 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0401 19:48:18.961084   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:19.157383   24564 kapi.go:107] duration metric: took 1m23.005777225s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0401 19:48:19.461067   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:19.961366   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:20.522718   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:20.961101   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:21.462303   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:21.961507   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:22.461702   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:22.960991   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:23.461655   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:23.961013   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:24.461806   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:24.961361   24564 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0401 19:48:25.461883   24564 kapi.go:107] duration metric: took 1m28.003728193s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0401 19:48:25.463477   24564 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0401 19:48:25.464656   24564 addons.go:514] duration metric: took 1m35.377317845s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0401 19:48:25.464699   24564 start.go:246] waiting for cluster config update ...
	I0401 19:48:25.464721   24564 start.go:255] writing updated cluster config ...
	I0401 19:48:25.464962   24564 ssh_runner.go:195] Run: rm -f paused
	I0401 19:48:25.512478   24564 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 19:48:25.514098   24564 out.go:177] * Done! kubectl is now configured to use "addons-649141" cluster and "default" namespace by default
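
The kapi.go entries above are minikube's add-on readiness poll: every few hundred milliseconds it lists pods matching a label selector, logs the current phase, and stops when a pod reports Running, then emits the duration metric. A minimal sketch of the same pattern with client-go (an illustration under assumed conditions — a reachable kubeconfig at the default path; minikube's actual kapi.go adds backoff and richer state checks):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until some pod matching selector reports Running,
    // mirroring the "waiting for pod ... current state: Pending" loop above.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == corev1.PodRunning {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	err = waitForLabel(ctx, kubernetes.NewForConfigOrDie(cfg), "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
    	fmt.Println("wait result:", err)
    }
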
	
	
	==> CRI-O <==
	Apr 01 19:49:45 addons-649141 crio[1044]: time="2025-04-01 19:49:45.662551632Z" level=info msg="Removed pod sandbox: 7352eb8894e4cbf24caa2c85a10c66c34dbe3c6e1cba8571a4833e13b611289f" id=82e12d41-ff1c-4ba7-8053-873a13164bce name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 01 19:49:50 addons-649141 crio[1044]: time="2025-04-01 19:49:50.923606572Z" level=warning msg="Stopping container 4818abeacd43e6c626223099d7c34f0eeb97ca2a086558c4caea535721e81bb4 with stop signal timed out: timeout reached after 30 seconds waiting for container process to exit" id=6d57e507-2380-46c0-b75d-54d35ff247f2 name=/runtime.v1.RuntimeService/StopContainer
	Apr 01 19:49:50 addons-649141 conmon[5585]: conmon 4818abeacd43e6c62622 <ninfo>: container 5597 exited with status 137
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.055754683Z" level=info msg="Stopped container 4818abeacd43e6c626223099d7c34f0eeb97ca2a086558c4caea535721e81bb4: local-path-storage/local-path-provisioner-76f89f99b5-hqdvr/local-path-provisioner" id=6d57e507-2380-46c0-b75d-54d35ff247f2 name=/runtime.v1.RuntimeService/StopContainer
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.056313774Z" level=info msg="Stopping pod sandbox: 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=b0996537-f9df-433f-8fc1-2f40f9bcb118 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.056531884Z" level=info msg="Got pod network &{Name:local-path-provisioner-76f89f99b5-hqdvr Namespace:local-path-storage ID:5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f UID:fe15eb58-f767-4c30-b39e-9e75637a8b95 NetNS:/var/run/netns/64caedb4-072a-42a2-b970-2cab7b15b992 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.056696441Z" level=info msg="Deleting pod local-path-storage_local-path-provisioner-76f89f99b5-hqdvr from CNI network \"kindnet\" (type=ptp)"
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.095318325Z" level=info msg="Stopped pod sandbox: 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=b0996537-f9df-433f-8fc1-2f40f9bcb118 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.452521632Z" level=info msg="Removing container: 4818abeacd43e6c626223099d7c34f0eeb97ca2a086558c4caea535721e81bb4" id=e9e27f6d-c2cb-4337-8a64-5d31dd2126ea name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 01 19:49:51 addons-649141 crio[1044]: time="2025-04-01 19:49:51.466063649Z" level=info msg="Removed container 4818abeacd43e6c626223099d7c34f0eeb97ca2a086558c4caea535721e81bb4: local-path-storage/local-path-provisioner-76f89f99b5-hqdvr/local-path-provisioner" id=e9e27f6d-c2cb-4337-8a64-5d31dd2126ea name=/runtime.v1.RuntimeService/RemoveContainer
	Apr 01 19:50:45 addons-649141 crio[1044]: time="2025-04-01 19:50:45.666007134Z" level=info msg="Stopping pod sandbox: 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=243a9371-122f-4256-bb91-f88b73a26ac7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 01 19:50:45 addons-649141 crio[1044]: time="2025-04-01 19:50:45.666063621Z" level=info msg="Stopped pod sandbox (already stopped): 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=243a9371-122f-4256-bb91-f88b73a26ac7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 01 19:50:45 addons-649141 crio[1044]: time="2025-04-01 19:50:45.666328502Z" level=info msg="Removing pod sandbox: 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=22113adc-e89e-42b3-83bb-6744d15b47bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 01 19:50:45 addons-649141 crio[1044]: time="2025-04-01 19:50:45.672271297Z" level=info msg="Removed pod sandbox: 5cea92735729409bf68b4ec97a9a42fafc59f9fe47268a07a24edd00eac2fe6f" id=22113adc-e89e-42b3-83bb-6744d15b47bf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.136637450Z" level=info msg="Running pod sandbox: default/hello-world-app-7d9564db4-vkvrr/POD" id=89a01e69-b076-4f4c-bc5a-734c021a037a name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.136715820Z" level=warning msg="Allowed annotations are specified for workload []"
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.159871534Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vkvrr Namespace:default ID:fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115 UID:517034fc-8051-4bea-9697-6cd5da9c555f NetNS:/var/run/netns/eb53b384-b2f5-4cfb-bc45-ff3608be105d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.159916258Z" level=info msg="Adding pod default_hello-world-app-7d9564db4-vkvrr to CNI network \"kindnet\" (type=ptp)"
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.173437066Z" level=info msg="Got pod network &{Name:hello-world-app-7d9564db4-vkvrr Namespace:default ID:fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115 UID:517034fc-8051-4bea-9697-6cd5da9c555f NetNS:/var/run/netns/eb53b384-b2f5-4cfb-bc45-ff3608be105d Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.173575253Z" level=info msg="Checking pod default_hello-world-app-7d9564db4-vkvrr for CNI network kindnet (type=ptp)"
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.176587141Z" level=info msg="Ran pod sandbox fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115 with infra container: default/hello-world-app-7d9564db4-vkvrr/POD" id=89a01e69-b076-4f4c-bc5a-734c021a037a name=/runtime.v1.RuntimeService/RunPodSandbox
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.177777879Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=40ae376e-95c7-48e2-bff5-b6822874fa67 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.178072160Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=40ae376e-95c7-48e2-bff5-b6822874fa67 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.178570208Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=a69d6802-3514-42bc-a4bb-e086af2b2c95 name=/runtime.v1.ImageService/PullImage
	Apr 01 19:51:40 addons-649141 crio[1044]: time="2025-04-01 19:51:40.218166572Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
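
The sandbox lifecycle above (RunPodSandbox, StopPodSandbox, RemovePodSandbox) is the kubelet driving CRI-O over the CRI gRPC API on /var/run/crio/crio.sock (the same socket named in the node's cri-socket annotation below). That service can also be queried directly; a hedged sketch using k8s.io/cri-api, roughly equivalent to running crictl pods as root on the node:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial CRI-O's CRI endpoint; this only works on the node itself.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// ListPodSandbox is the read side of the RunPodSandbox/StopPodSandbox
    	// calls logged above.
    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
    		ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range resp.Items {
    		fmt.Printf("%.13s %s/%s %s\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
    	}
    }
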
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7329b458af920       docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                              2 minutes ago       Running             nginx                     0                   cacde2d3e9317       nginx
	dfed329caf137       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   8a25deec94549       busybox
	cd49ae9810848       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   efc082c5a56eb       ingress-nginx-controller-56d7c84fd4-d8mnp
	eb346a9264f40       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     2                   ec033c652933e       ingress-nginx-admission-patch-hpk6m
	154e71198cf1d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   6652afe7be13c       ingress-nginx-admission-create-jk2cv
	757c711003ffb       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   ef421266b094c       kube-ingress-dns-minikube
	c9be28a3ed027       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   038f08186d444       storage-provisioner
	4dc53e7a2ef16       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   d62e2d6ca7d59       coredns-668d6bf9bc-8jzlj
	867b37bbd8fc8       docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495                           4 minutes ago       Running             kindnet-cni               0                   d6e4ac9b4ddf7       kindnet-6hg88
	8f7effaf5b685       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5                                                             4 minutes ago       Running             kube-proxy                0                   189ed7f24a657       kube-proxy-dm42l
	b3e556efe2482       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             5 minutes ago       Running             etcd                      0                   292139103eb79       etcd-addons-649141
	ada3023be8019       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef                                                             5 minutes ago       Running             kube-apiserver            0                   424e9cb574436       kube-apiserver-addons-649141
	3fb84628e14d0       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d                                                             5 minutes ago       Running             kube-scheduler            0                   32ee88d0b181d       kube-scheduler-addons-649141
	66e561d30386d       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389                                                             5 minutes ago       Running             kube-controller-manager   0                   1c9b91938e5af       kube-controller-manager-addons-649141
	
	
	==> coredns [4dc53e7a2ef16d3c05a4e68e6d4d72d004facf679d116ed7bea8af405e462db3] <==
	[INFO] 10.244.0.13:36315 - 9290 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122914s
	[INFO] 10.244.0.13:38048 - 56430 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004452763s
	[INFO] 10.244.0.13:38048 - 56003 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004614356s
	[INFO] 10.244.0.13:46330 - 31357 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004217737s
	[INFO] 10.244.0.13:46330 - 31025 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004688778s
	[INFO] 10.244.0.13:56641 - 33045 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003908849s
	[INFO] 10.244.0.13:56641 - 33290 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005415844s
	[INFO] 10.244.0.13:43165 - 34473 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000097983s
	[INFO] 10.244.0.13:43165 - 34207 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116968s
	[INFO] 10.244.0.21:45347 - 42173 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000217205s
	[INFO] 10.244.0.21:42266 - 19309 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000307232s
	[INFO] 10.244.0.21:48113 - 56276 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133392s
	[INFO] 10.244.0.21:55401 - 58137 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014885s
	[INFO] 10.244.0.21:42368 - 60543 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114516s
	[INFO] 10.244.0.21:38657 - 22659 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016179s
	[INFO] 10.244.0.21:55549 - 14577 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005415405s
	[INFO] 10.244.0.21:35011 - 37359 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005414349s
	[INFO] 10.244.0.21:52173 - 10131 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004776281s
	[INFO] 10.244.0.21:51673 - 13923 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009175593s
	[INFO] 10.244.0.21:55274 - 4830 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005098132s
	[INFO] 10.244.0.21:59183 - 26353 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005149536s
	[INFO] 10.244.0.21:51459 - 34299 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.001871258s
	[INFO] 10.244.0.21:49809 - 49202 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.002008971s
	[INFO] 10.244.0.26:44828 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000183685s
	[INFO] 10.244.0.26:34341 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000143636s
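
The NXDOMAIN ladders above are ordinary resolv.conf search-path expansion, not failures: with the cluster default ndots:5, a name such as registry.kube-system.svc.cluster.local (four dots) is first tried with every search suffix appended, each candidate as both A and AAAA, and only the final absolute query returns NOERROR. A small illustration of the candidate order (search list transcribed from the queries above; a real pod reads it from /etc/resolv.conf):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	name := "registry.kube-system.svc.cluster.local"
    	search := []string{
    		"cluster.local",
    		"europe-west1-b.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal",
    		"google.internal",
    	}
    	const ndots = 5
    	// Fewer than ndots dots: try each search suffix first, then the
    	// name as-is -- exactly the sequence coredns logged above.
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			fmt.Println(name + "." + s)
    		}
    	}
    	fmt.Println(name) // absolute attempt; answered NOERROR above
    }
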
	
	
	==> describe nodes <==
	Name:               addons-649141
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-649141
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=addons-649141
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T19_46_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-649141
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 19:46:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-649141
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 19:51:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 19:49:48 +0000   Tue, 01 Apr 2025 19:46:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 19:49:48 +0000   Tue, 01 Apr 2025 19:46:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 19:49:48 +0000   Tue, 01 Apr 2025 19:46:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Apr 2025 19:49:48 +0000   Tue, 01 Apr 2025 19:47:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-649141
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 be9a62a386d54264a947121a8d2a897f
	  System UUID:                6ca91a62-cf4a-491b-871a-a146b0912c6f
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-world-app-7d9564db4-vkvrr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-d8mnp    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m46s
	  kube-system                 coredns-668d6bf9bc-8jzlj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m50s
	  kube-system                 etcd-addons-649141                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m56s
	  kube-system                 kindnet-6hg88                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-649141                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-addons-649141        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-dm42l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-649141                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m46s  kube-proxy       
	  Normal   Starting                 4m56s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m56s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m56s  kubelet          Node addons-649141 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m56s  kubelet          Node addons-649141 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m56s  kubelet          Node addons-649141 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m52s  node-controller  Node addons-649141 event: Registered Node addons-649141 in Controller
	  Normal   NodeReady                4m32s  kubelet          Node addons-649141 status is now: NodeReady
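
A quick cross-check of the Allocated resources summary: summing the CPU Requests column of the pod table gives 100m (ingress controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 950m, and 950m of the node's 8000m capacity truncates to the 11% kubectl reports. The same arithmetic in a few lines:

    package main

    import "fmt"

    func main() {
    	// CPU requests in millicores, taken from the pod table above; pods
    	// with no request contribute nothing.
    	requests := []int{100, 100, 100, 100, 250, 200, 100}
    	total := 0
    	for _, m := range requests {
    		total += m
    	}
    	// Node capacity is 8 CPUs = 8000m; kubectl rounds the percentage down.
    	fmt.Printf("%dm / 8000m = %d%%\n", total, total*100/8000) // 950m / 8000m = 11%
    }
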
	
	
	==> dmesg <==
	[  +0.000731] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000633] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000636] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000621] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000734] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000606] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.604650] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021993] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.066381] kauditd_printk_skb: 46 callbacks suppressed
	[Apr 1 19:49] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[  +1.012044] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[  +2.015865] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[  +4.067705] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[  +8.187511] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000044] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[ +16.126911] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	[Apr 1 19:50] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 7a 13 8e a5 af 49 82 fb 63 8d 03 29 08 00
	
	
	==> etcd [b3e556efe2482fa3aa4d5df5123bdedbbd84d28f4ab1eca3efebd47800a47aef] <==
	{"level":"info","ts":"2025-04-01T19:46:52.419981Z","caller":"traceutil/trace.go:171","msg":"trace[1501048642] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-6hg88; range_end:; response_count:1; response_revision:351; }","duration":"181.007552ms","start":"2025-04-01T19:46:52.238963Z","end":"2025-04-01T19:46:52.419970Z","steps":["trace[1501048642] 'agreement among raft nodes before linearized reading'  (duration: 180.085292ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:46:52.419113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.452559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:46:52.420175Z","caller":"traceutil/trace.go:171","msg":"trace[325280541] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:351; }","duration":"185.518539ms","start":"2025-04-01T19:46:52.234646Z","end":"2025-04-01T19:46:52.420164Z","steps":["trace[325280541] 'agreement among raft nodes before linearized reading'  (duration: 184.454391ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:52.533963Z","caller":"traceutil/trace.go:171","msg":"trace[471601213] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"102.197846ms","start":"2025-04-01T19:46:52.431744Z","end":"2025-04-01T19:46:52.533942Z","steps":["trace[471601213] 'process raft request'  (duration: 102.112596ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:52.935288Z","caller":"traceutil/trace.go:171","msg":"trace[373103389] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"104.256264ms","start":"2025-04-01T19:46:52.831014Z","end":"2025-04-01T19:46:52.935270Z","steps":["trace[373103389] 'process raft request'  (duration: 104.146286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:46:53.228071Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.741575ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128036297941282421 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" mod_revision:249 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" value_size:150 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-04-01T19:46:53.228463Z","caller":"traceutil/trace.go:171","msg":"trace[1469127865] linearizableReadLoop","detail":"{readStateIndex:377; appliedIndex:373; }","duration":"108.463996ms","start":"2025-04-01T19:46:53.119982Z","end":"2025-04-01T19:46:53.228446Z","steps":["trace[1469127865] 'read index received'  (duration: 4.282113ms)","trace[1469127865] 'applied index is now lower than readState.Index'  (duration: 104.181135ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-01T19:46:53.228589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.589961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-649141\" limit:1 ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2025-04-01T19:46:53.228629Z","caller":"traceutil/trace.go:171","msg":"trace[790146610] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-649141; range_end:; response_count:1; response_revision:368; }","duration":"108.656989ms","start":"2025-04-01T19:46:53.119963Z","end":"2025-04-01T19:46:53.228620Z","steps":["trace[790146610] 'agreement among raft nodes before linearized reading'  (duration: 108.555097ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.228884Z","caller":"traceutil/trace.go:171","msg":"trace[1486300555] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"109.839587ms","start":"2025-04-01T19:46:53.119033Z","end":"2025-04-01T19:46:53.228873Z","steps":["trace[1486300555] 'compare'  (duration: 103.664408ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.229015Z","caller":"traceutil/trace.go:171","msg":"trace[549584813] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"109.08643ms","start":"2025-04-01T19:46:53.119895Z","end":"2025-04-01T19:46:53.228981Z","steps":["trace[549584813] 'process raft request'  (duration: 108.45094ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.229109Z","caller":"traceutil/trace.go:171","msg":"trace[813022767] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"109.030219ms","start":"2025-04-01T19:46:53.120067Z","end":"2025-04-01T19:46:53.229098Z","steps":["trace[813022767] 'process raft request'  (duration: 108.317008ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.229260Z","caller":"traceutil/trace.go:171","msg":"trace[147339877] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"105.984125ms","start":"2025-04-01T19:46:53.123268Z","end":"2025-04-01T19:46:53.229252Z","steps":["trace[147339877] 'process raft request'  (duration: 105.143145ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.229327Z","caller":"traceutil/trace.go:171","msg":"trace[386862911] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"109.547059ms","start":"2025-04-01T19:46:53.119768Z","end":"2025-04-01T19:46:53.229315Z","steps":["trace[386862911] 'process raft request'  (duration: 108.375535ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:46:53.229422Z","caller":"traceutil/trace.go:171","msg":"trace[1675782226] transaction","detail":"{read_only:false; number_of_response:1; response_revision:365; }","duration":"109.577908ms","start":"2025-04-01T19:46:53.119835Z","end":"2025-04-01T19:46:53.229413Z","steps":["trace[1675782226] 'process raft request'  (duration: 108.439314ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:03.085910Z","caller":"traceutil/trace.go:171","msg":"trace[218027257] linearizableReadLoop","detail":"{readStateIndex:1151; appliedIndex:1150; }","duration":"125.763576ms","start":"2025-04-01T19:48:02.960132Z","end":"2025-04-01T19:48:03.085896Z","steps":["trace[218027257] 'read index received'  (duration: 125.605608ms)","trace[218027257] 'applied index is now lower than readState.Index'  (duration: 157.576µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T19:48:03.086017Z","caller":"traceutil/trace.go:171","msg":"trace[278451440] transaction","detail":"{read_only:false; response_revision:1117; number_of_response:1; }","duration":"126.941785ms","start":"2025-04-01T19:48:02.959055Z","end":"2025-04-01T19:48:03.085997Z","steps":["trace[278451440] 'process raft request'  (duration: 126.743048ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T19:48:03.086036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.880875ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-01T19:48:03.086264Z","caller":"traceutil/trace.go:171","msg":"trace[1199100204] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1117; }","duration":"126.132718ms","start":"2025-04-01T19:48:02.960111Z","end":"2025-04-01T19:48:03.086244Z","steps":["trace[1199100204] 'agreement among raft nodes before linearized reading'  (duration: 125.881584ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:30.371067Z","caller":"traceutil/trace.go:171","msg":"trace[436469490] transaction","detail":"{read_only:false; response_revision:1232; number_of_response:1; }","duration":"128.337618ms","start":"2025-04-01T19:48:30.242708Z","end":"2025-04-01T19:48:30.371045Z","steps":["trace[436469490] 'process raft request'  (duration: 80.753521ms)","trace[436469490] 'compare'  (duration: 47.480964ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T19:48:30.384005Z","caller":"traceutil/trace.go:171","msg":"trace[1425174070] transaction","detail":"{read_only:false; response_revision:1235; number_of_response:1; }","duration":"140.984976ms","start":"2025-04-01T19:48:30.243004Z","end":"2025-04-01T19:48:30.383989Z","steps":["trace[1425174070] 'process raft request'  (duration: 140.928689ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:30.384052Z","caller":"traceutil/trace.go:171","msg":"trace[730301330] transaction","detail":"{read_only:false; response_revision:1236; number_of_response:1; }","duration":"139.748753ms","start":"2025-04-01T19:48:30.244281Z","end":"2025-04-01T19:48:30.384030Z","steps":["trace[730301330] 'process raft request'  (duration: 139.672185ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:30.384012Z","caller":"traceutil/trace.go:171","msg":"trace[2044828088] transaction","detail":"{read_only:false; response_revision:1233; number_of_response:1; }","duration":"141.079657ms","start":"2025-04-01T19:48:30.242905Z","end":"2025-04-01T19:48:30.383985Z","steps":["trace[2044828088] 'process raft request'  (duration: 140.912049ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:30.384129Z","caller":"traceutil/trace.go:171","msg":"trace[825491733] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"141.162921ms","start":"2025-04-01T19:48:30.242943Z","end":"2025-04-01T19:48:30.384106Z","steps":["trace[825491733] 'process raft request'  (duration: 140.962846ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T19:48:40.675603Z","caller":"traceutil/trace.go:171","msg":"trace[2016387955] transaction","detail":"{read_only:false; response_revision:1268; number_of_response:1; }","duration":"115.120047ms","start":"2025-04-01T19:48:40.560453Z","end":"2025-04-01T19:48:40.675573Z","steps":["trace[2016387955] 'process raft request'  (duration: 52.250584ms)","trace[2016387955] 'compare'  (duration: 62.754968ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:51:41 up 34 min,  0 users,  load average: 0.26, 0.70, 0.38
	Linux addons-649141 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [867b37bbd8fc82b29187cf4ab440b30ade2849b01dbdd86c4f388b7a5cd8f5da] <==
	I0401 19:49:38.751654       1 main.go:301] handling current node
	I0401 19:49:48.755901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:49:48.755935       1 main.go:301] handling current node
	I0401 19:49:58.750840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:49:58.750877       1 main.go:301] handling current node
	I0401 19:50:08.750885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:08.750919       1 main.go:301] handling current node
	I0401 19:50:18.757810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:18.757841       1 main.go:301] handling current node
	I0401 19:50:28.758210       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:28.758241       1 main.go:301] handling current node
	I0401 19:50:38.753859       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:38.753894       1 main.go:301] handling current node
	I0401 19:50:48.757828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:48.757855       1 main.go:301] handling current node
	I0401 19:50:58.751123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:50:58.751156       1 main.go:301] handling current node
	I0401 19:51:08.755508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:51:08.755540       1 main.go:301] handling current node
	I0401 19:51:18.759639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:51:18.759677       1 main.go:301] handling current node
	I0401 19:51:28.757818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:51:28.757849       1 main.go:301] handling current node
	I0401 19:51:38.750903       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0401 19:51:38.750942       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ada3023be80191c0521074814e6f733b63424faf8388ab61ef058a507ba11299] <==
	I0401 19:47:35.937065       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0401 19:48:38.221283       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57770: use of closed network connection
	E0401 19:48:38.378934       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57790: use of closed network connection
	I0401 19:48:47.345233       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.47.148"}
	I0401 19:49:07.015228       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0401 19:49:16.774416       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0401 19:49:16.976973       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.157.154"}
	I0401 19:49:17.163280       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0401 19:49:18.177729       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0401 19:49:35.823171       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0401 19:49:36.151574       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:49:36.151624       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:49:36.163676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:49:36.163733       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:49:36.164247       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:49:36.164344       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:49:36.173295       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:49:36.173341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:49:36.187118       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0401 19:49:36.187153       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0401 19:49:36.872803       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W0401 19:49:37.165151       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0401 19:49:37.188037       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0401 19:49:37.328103       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0401 19:51:40.032959       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.60.139"}
	
	
	==> kube-controller-manager [66e561d30386d6352a479afa093e1b731c4c38365efe33b086627320e2d0daca] <==
	E0401 19:50:44.701129       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:50:49.315287       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:50:49.316231       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0401 19:50:49.316953       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:50:49.316978       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:50:54.372374       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:50:54.373321       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0401 19:50:54.374095       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:50:54.374127       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:23.094262       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:23.095053       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0401 19:51:23.095839       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:23.095872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:24.284323       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:24.285168       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0401 19:51:24.285978       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:24.286018       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0401 19:51:28.475579       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0401 19:51:28.476382       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0401 19:51:28.477219       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0401 19:51:28.477250       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0401 19:51:39.836118       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="11.595001ms"
	I0401 19:51:39.839718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="3.551839ms"
	I0401 19:51:39.839790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.917µs"
	I0401 19:51:39.844189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="51.649µs"
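
The snapshot.storage.k8s.io and gadget.kinvolk.io watch failures above are the tail end of CRD removal: the kube-apiserver log earlier shows those groups being torn down at 19:49:36-37 ("Terminating all watchers from cacher ..."), presumably as the volumesnapshots and inspektor-gadget addons were disabled, after which the controller-manager's metadata informers keep retrying resources that no longer exist. The absence is easy to confirm with a discovery query (a sketch, assuming client-go and a reachable kubeconfig):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/discovery"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Once the CRDs are gone this returns "the server could not find the
    	// requested resource" -- the same error the informers log above.
    	_, err = dc.ServerResourcesForGroupVersion("snapshot.storage.k8s.io/v1")
    	fmt.Println(err)
    }
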
	
	
	==> kube-proxy [8f7effaf5b685504904c51ab486b2ae707eb561e1c153d890bf9fcbb4df285db] <==
	I0401 19:46:53.722921       1 server_linux.go:66] "Using iptables proxy"
	I0401 19:46:54.328502       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0401 19:46:54.331280       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 19:46:54.625877       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 19:46:54.626027       1 server_linux.go:170] "Using iptables Proxier"
	I0401 19:46:54.629525       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 19:46:54.641643       1 server.go:497] "Version info" version="v1.32.2"
	I0401 19:46:54.641795       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 19:46:54.726186       1 config.go:199] "Starting service config controller"
	I0401 19:46:54.726901       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 19:46:54.726959       1 config.go:329] "Starting node config controller"
	I0401 19:46:54.726999       1 config.go:105] "Starting endpoint slice config controller"
	I0401 19:46:54.727001       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 19:46:54.727010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 19:46:54.928163       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 19:46:54.929282       1 shared_informer.go:320] Caches are synced for node config
	I0401 19:46:54.930369       1 shared_informer.go:320] Caches are synced for service config
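
The route_localnet=1 line above connects to the dmesg section: the martian entries there (source 127.0.0.1, destination pod IP 10.244.0.22, recurring at doubling intervals consistent with TCP retransmission) are characteristic of the localhost NodePort path that this sysctl exists to permit. A sketch for reading the knob back on the node (standard procfs path; interface names are assumptions):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// kube-proxy's log above says it set route_localnet=1; read it back
    	// from procfs for the catch-all entry and the pod-facing interface.
    	for _, iface := range []string{"all", "eth0"} {
    		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/" + iface + "/route_localnet")
    		if err != nil {
    			fmt.Println(iface, err)
    			continue
    		}
    		fmt.Printf("%s: route_localnet=%s\n", iface, strings.TrimSpace(string(b)))
    	}
    }
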
	
	
	==> kube-scheduler [3fb84628e14d04c4214afde72586e1ca11272f8e3cfc037f0c36e7460bde1e07] <==
	E0401 19:46:43.044690       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.044694       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:46:43.044703       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0401 19:46:43.044710       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.044637       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 19:46:43.044733       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.887784       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 19:46:43.887824       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.921251       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 19:46:43.921289       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.973140       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 19:46:43.973216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:43.981414       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 19:46:43.981445       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:44.085439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 19:46:44.085485       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:44.142701       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 19:46:44.142736       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:44.161893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0401 19:46:44.161931       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 19:46:44.161931       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0401 19:46:44.161960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 19:46:44.186165       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 19:46:44.186200       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0401 19:46:44.542587       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
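
The forbidden list/watch errors above are startup noise rather than a real RBAC problem: the scheduler's informers begin listing before the apiserver finishes binding the system:kube-scheduler role, and the closing "Caches are synced" line shows they recovered once the bindings landed. A minimal spot-check, assuming admin kubectl access to the same context (not part of the test itself):

	# Expect "yes" once RBAC has propagated; the earlier errors were transient.
	kubectl --context addons-649141 auth can-i list pods --as=system:kube-scheduler --all-namespaces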
	
	
	==> kubelet <==
	Apr 01 19:50:55 addons-649141 kubelet[1672]: E0401 19:50:55.573329    1672 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537055573078934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:00 addons-649141 kubelet[1672]: I0401 19:51:00.436252    1672 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 01 19:51:05 addons-649141 kubelet[1672]: E0401 19:51:05.575634    1672 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537065575402662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:05 addons-649141 kubelet[1672]: E0401 19:51:05.575667    1672 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537065575402662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:15 addons-649141 kubelet[1672]: E0401 19:51:15.577768    1672 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537075577553451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:15 addons-649141 kubelet[1672]: E0401 19:51:15.577810    1672 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537075577553451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:16 addons-649141 kubelet[1672]: I0401 19:51:16.437069    1672 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-668d6bf9bc-8jzlj" secret="" err="secret \"gcp-auth\" not found"
	Apr 01 19:51:25 addons-649141 kubelet[1672]: E0401 19:51:25.579821    1672 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537085579568391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:25 addons-649141 kubelet[1672]: E0401 19:51:25.579856    1672 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537085579568391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:35 addons-649141 kubelet[1672]: E0401 19:51:35.581861    1672 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537095581644127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:35 addons-649141 kubelet[1672]: E0401 19:51:35.581895    1672 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743537095581644127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:617400,},InodesUsed:&UInt64Value{Value:236,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835177    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="4436ad77-fde6-43d4-a572-2a8338542c73" containerName="volume-snapshot-controller"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835214    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="fe15eb58-f767-4c30-b39e-9e75637a8b95" containerName="local-path-provisioner"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835225    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="hostpath"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835234    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="b008e31a-28fb-4441-8009-d74ff8742b26" containerName="task-pv-container"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835244    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="csi-snapshotter"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835253    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="dcc8708d-0856-43c1-94ba-311112735821" containerName="csi-attacher"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835263    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="1c31f450-758f-4fc7-b11d-dcefac0271dc" containerName="volume-snapshot-controller"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835271    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="csi-external-health-monitor-controller"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835280    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="liveness-probe"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835287    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="06ae5514-6975-4b51-83e2-3fc72715011c" containerName="csi-resizer"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835296    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="node-driver-registrar"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.835304    1672 memory_manager.go:355] "RemoveStaleState removing state" podUID="572b057a-75cf-417a-9ccb-a13d62050118" containerName="csi-provisioner"
	Apr 01 19:51:39 addons-649141 kubelet[1672]: I0401 19:51:39.914357    1672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pqt\" (UniqueName: \"kubernetes.io/projected/517034fc-8051-4bea-9697-6cd5da9c555f-kube-api-access-q7pqt\") pod \"hello-world-app-7d9564db4-vkvrr\" (UID: \"517034fc-8051-4bea-9697-6cd5da9c555f\") " pod="default/hello-world-app-7d9564db4-vkvrr"
	Apr 01 19:51:40 addons-649141 kubelet[1672]: W0401 19:51:40.175566    1672 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/663e65e28bd5a147ea099f502feb9e11716bc2b50727304014f08d3b76d03c72/crio-fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115 WatchSource:0}: Error finding container fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115: Status 404 returned error can't find the container with id fe4d24be211737bf8e8b4de396bbb7c62abc0d403a3cf61c757752ccb163b115
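
The repeating eviction_manager failures above fire on the kubelet's ten-second housekeeping tick: the kubelet asks the CRI runtime for image-filesystem stats, and the CRI-O reply carries an empty ContainerFilesystems list, so HasDedicatedImageFs cannot be derived. A hedged way to see the raw CRI answer from inside the node (via minikube ssh); the subcommand exists in crictl, though the exact output shape varies by version:

	# Prints the same ImageFsInfoResponse the kubelet is complaining about.
	sudo crictl imagefsinfo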
	
	
	==> storage-provisioner [c9be28a3ed027357ea85983d90429ade3a9cd20384df74fe631970e1fe106418] <==
	I0401 19:47:10.039342       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0401 19:47:10.046246       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0401 19:47:10.046290       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0401 19:47:10.052106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0401 19:47:10.052175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99a3429f-e099-492d-a4f6-e13d0cad7ef5", APIVersion:"v1", ResourceVersion:"890", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-649141_d26d75c6-04ce-4afc-8836-5cb531ee47ab became leader
	I0401 19:47:10.052233       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-649141_d26d75c6-04ce-4afc-8836-5cb531ee47ab!
	I0401 19:47:10.152641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-649141_d26d75c6-04ce-4afc-8836-5cb531ee47ab!
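
The provisioner serializes through client-go leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object, as the LeaderElection event above records. A sketch for inspecting the current holder, assuming this storage-provisioner build still publishes the legacy leader annotation:

	# The control-plane.alpha.kubernetes.io/leader annotation names the active instance.
	kubectl --context addons-649141 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'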
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-649141 -n addons-649141
helpers_test.go:261: (dbg) Run:  kubectl --context addons-649141 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-vkvrr ingress-nginx-admission-create-jk2cv ingress-nginx-admission-patch-hpk6m
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-649141 describe pod hello-world-app-7d9564db4-vkvrr ingress-nginx-admission-create-jk2cv ingress-nginx-admission-patch-hpk6m
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-649141 describe pod hello-world-app-7d9564db4-vkvrr ingress-nginx-admission-create-jk2cv ingress-nginx-admission-patch-hpk6m: exit status 1 (62.437349ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-vkvrr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-649141/192.168.49.2
	Start Time:       Tue, 01 Apr 2025 19:51:39 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q7pqt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-q7pqt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-vkvrr to addons-649141
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jk2cv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-hpk6m" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-649141 describe pod hello-world-app-7d9564db4-vkvrr ingress-nginx-admission-create-jk2cv ingress-nginx-admission-patch-hpk6m: exit status 1
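
The post-mortem narrows the failure: hello-world-app-7d9564db4-vkvrr had been scheduled and was still pulling docker.io/kicbase/echo-server:1.0 when describe ran, and the two ingress-nginx admission pods were cleaned up between the pod listing and the describe call (hence NotFound). A hedged wait that would tolerate a slow image pull outside the test's fixed window (the 180s budget is an assumption, not a value from the test):

	kubectl --context addons-649141 wait pod -l app=hello-world-app --for=condition=Ready --timeout=180s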
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable ingress-dns --alsologtostderr -v=1: (1.509747193s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable ingress --alsologtostderr -v=1: (7.616067454s)
--- FAIL: TestAddons/parallel/Ingress (154.84s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (298.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 80 (4m56.722247122s)

                                                
                                                
-- stdout --
	* [old-k8s-version-964633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "old-k8s-version-964633" primary control-plane node in "old-k8s-version-964633" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:25:46.868961  318306 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:25:46.869282  318306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:25:46.869309  318306 out.go:358] Setting ErrFile to fd 2...
	I0401 20:25:46.869321  318306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:25:46.869541  318306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:25:46.870342  318306 out.go:352] Setting JSON to false
	I0401 20:25:46.871973  318306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4093,"bootTime":1743535054,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:25:46.872100  318306 start.go:139] virtualization: kvm guest
	I0401 20:25:46.874104  318306 out.go:177] * [old-k8s-version-964633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:25:46.875548  318306 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:25:46.875574  318306 notify.go:220] Checking for updates...
	I0401 20:25:46.877915  318306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:25:46.879036  318306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:25:46.880352  318306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:25:46.881870  318306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:25:46.883147  318306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:25:46.885164  318306 config.go:182] Loaded profile config "bridge-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:46.885318  318306 config.go:182] Loaded profile config "flannel-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:46.885430  318306 config.go:182] Loaded profile config "kubernetes-upgrade-337773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:46.885545  318306 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:25:46.916063  318306 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:25:46.916172  318306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:25:46.973726  318306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-01 20:25:46.963213114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:25:46.973912  318306 docker.go:318] overlay module found
	I0401 20:25:46.975739  318306 out.go:177] * Using the docker driver based on user configuration
	I0401 20:25:46.976896  318306 start.go:297] selected driver: docker
	I0401 20:25:46.976911  318306 start.go:901] validating driver "docker" against <nil>
	I0401 20:25:46.976922  318306 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:25:46.977881  318306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:25:47.034208  318306 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-01 20:25:47.023327431 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:25:47.034401  318306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:25:47.034669  318306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:25:47.036569  318306 out.go:177] * Using Docker driver with root privileges
	I0401 20:25:47.037710  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:25:47.037813  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:25:47.037833  318306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:25:47.037921  318306 start.go:340] cluster config:
	{Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:25:47.039215  318306 out.go:177] * Starting "old-k8s-version-964633" primary control-plane node in "old-k8s-version-964633" cluster
	I0401 20:25:47.040285  318306 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:25:47.041350  318306 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:25:47.042617  318306 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:25:47.042673  318306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 20:25:47.042686  318306 cache.go:56] Caching tarball of preloaded images
	I0401 20:25:47.042744  318306 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:25:47.042824  318306 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:25:47.042848  318306 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 20:25:47.042978  318306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:25:47.043004  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json: {Name:mkb41ce499848d37d634cb747175ed10985e5c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:25:47.066723  318306 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:25:47.066749  318306 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:25:47.066770  318306 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:25:47.066811  318306 start.go:360] acquireMachinesLock for old-k8s-version-964633: {Name:mkcf81b33459cdbb9c109c2df72357b4097207d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:47.066930  318306 start.go:364] duration metric: took 99.604µs to acquireMachinesLock for "old-k8s-version-964633"
	I0401 20:25:47.066961  318306 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:25:47.067064  318306 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:25:47.069077  318306 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:25:47.069340  318306 start.go:159] libmachine.API.Create for "old-k8s-version-964633" (driver="docker")
	I0401 20:25:47.069375  318306 client.go:168] LocalClient.Create starting
	I0401 20:25:47.069465  318306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:25:47.069514  318306 main.go:141] libmachine: Decoding PEM data...
	I0401 20:25:47.069541  318306 main.go:141] libmachine: Parsing certificate...
	I0401 20:25:47.069624  318306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:25:47.069659  318306 main.go:141] libmachine: Decoding PEM data...
	I0401 20:25:47.069687  318306 main.go:141] libmachine: Parsing certificate...
	I0401 20:25:47.070132  318306 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:25:47.090560  318306 cli_runner.go:211] docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:25:47.090646  318306 network_create.go:284] running [docker network inspect old-k8s-version-964633] to gather additional debugging logs...
	I0401 20:25:47.090669  318306 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633
	W0401 20:25:47.111410  318306 cli_runner.go:211] docker network inspect old-k8s-version-964633 returned with exit code 1
	I0401 20:25:47.111448  318306 network_create.go:287] error running [docker network inspect old-k8s-version-964633]: docker network inspect old-k8s-version-964633: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-964633 not found
	I0401 20:25:47.111463  318306 network_create.go:289] output of [docker network inspect old-k8s-version-964633]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-964633 not found
	
	** /stderr **
	I0401 20:25:47.111607  318306 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:25:47.134093  318306 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:25:47.135214  318306 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:25:47.136406  318306 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:25:47.137165  318306 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-54795f4c4e71 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:29:cf:68:90:0b} reservation:<nil>}
	I0401 20:25:47.138407  318306 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe1250}
	I0401 20:25:47.138447  318306 network_create.go:124] attempt to create docker network old-k8s-version-964633 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0401 20:25:47.138516  318306 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-964633 old-k8s-version-964633
	I0401 20:25:47.201184  318306 network_create.go:108] docker network old-k8s-version-964633 192.168.85.0/24 created
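
The exit-1 "network not found" above is the expected negative probe: minikube checks for an existing old-k8s-version-964633 network, skips the four subnets already held by other bridges, and creates the cluster network on the first free /24. A one-line sketch to verify the allocation by hand:

	# Prints 192.168.85.0/24 if the create above succeeded.
	docker network inspect old-k8s-version-964633 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'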
	I0401 20:25:47.201251  318306 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-964633" container
	I0401 20:25:47.201326  318306 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:25:47.222911  318306 cli_runner.go:164] Run: docker volume create old-k8s-version-964633 --label name.minikube.sigs.k8s.io=old-k8s-version-964633 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:25:47.243317  318306 oci.go:103] Successfully created a docker volume old-k8s-version-964633
	I0401 20:25:47.243406  318306 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-964633-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-964633 --entrypoint /usr/bin/test -v old-k8s-version-964633:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:25:47.925959  318306 oci.go:107] Successfully prepared a docker volume old-k8s-version-964633
	I0401 20:25:47.925994  318306 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:25:47.926012  318306 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:25:47.926064  318306 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-964633:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:25:51.479311  318306 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-964633:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (3.553199428s)
	I0401 20:25:51.479347  318306 kic.go:203] duration metric: took 3.553329888s to extract preloaded images to volume ...
	W0401 20:25:51.479496  318306 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:25:51.479599  318306 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:25:51.538364  318306 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-964633 --name old-k8s-version-964633 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-964633 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-964633 --network old-k8s-version-964633 --ip 192.168.85.2 --volume old-k8s-version-964633:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:25:51.914123  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Running}}
	I0401 20:25:51.933509  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:25:51.954003  318306 cli_runner.go:164] Run: docker exec old-k8s-version-964633 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:25:52.007465  318306 oci.go:144] the created container "old-k8s-version-964633" has a running status.
	I0401 20:25:52.007506  318306 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa...
	I0401 20:25:52.641691  318306 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:25:52.665483  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:25:52.685896  318306 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:25:52.685923  318306 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-964633 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:25:52.731004  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:25:52.753165  318306 machine.go:93] provisionDockerMachine start ...
	I0401 20:25:52.753236  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:52.776340  318306 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:52.776675  318306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0401 20:25:52.776696  318306 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:25:52.913502  318306 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:25:52.913544  318306 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:25:52.913610  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:52.933307  318306 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:52.933507  318306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0401 20:25:52.933537  318306 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:25:53.084568  318306 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:25:53.084651  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:53.106281  318306 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:53.106550  318306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0401 20:25:53.106575  318306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:25:53.258421  318306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:25:53.258451  318306 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:25:53.258502  318306 ubuntu.go:177] setting up certificates
	I0401 20:25:53.258517  318306 provision.go:84] configureAuth start
	I0401 20:25:53.258571  318306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:25:53.281400  318306 provision.go:143] copyHostCerts
	I0401 20:25:53.281482  318306 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:25:53.281493  318306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:25:53.281570  318306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:25:53.281684  318306 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:25:53.281703  318306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:25:53.281788  318306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:25:53.281957  318306 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:25:53.281972  318306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:25:53.282011  318306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:25:53.282087  318306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:25:53.615409  318306 provision.go:177] copyRemoteCerts
	I0401 20:25:53.615524  318306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:25:53.615588  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:53.641992  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:25:53.747397  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:25:53.782377  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:25:53.818803  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:25:53.874023  318306 provision.go:87] duration metric: took 615.492089ms to configureAuth
	I0401 20:25:53.874054  318306 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:25:53.874235  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:25:53.874364  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:53.897584  318306 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:53.897869  318306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0401 20:25:53.897893  318306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:25:54.204857  318306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:25:54.204889  318306 machine.go:96] duration metric: took 1.45170283s to provisionDockerMachine
	I0401 20:25:54.204902  318306 client.go:171] duration metric: took 7.135519249s to LocalClient.Create
	I0401 20:25:54.204925  318306 start.go:167] duration metric: took 7.135585322s to libmachine.API.Create "old-k8s-version-964633"
	I0401 20:25:54.204935  318306 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:25:54.204948  318306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:25:54.205015  318306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:25:54.205060  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:54.226446  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:25:54.339195  318306 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:25:54.343892  318306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:25:54.343934  318306 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:25:54.343946  318306 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:25:54.343954  318306 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:25:54.343965  318306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:25:54.344027  318306 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:25:54.344139  318306 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:25:54.344266  318306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:25:54.353849  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:25:54.395190  318306 start.go:296] duration metric: took 190.240724ms for postStartSetup
	I0401 20:25:54.395616  318306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:25:54.415533  318306 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:25:54.415765  318306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:25:54.415798  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:54.459355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:25:54.561329  318306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:25:54.566966  318306 start.go:128] duration metric: took 7.499885209s to createHost
	I0401 20:25:54.566990  318306 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 7.500046336s
	I0401 20:25:54.567055  318306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:25:54.589620  318306 ssh_runner.go:195] Run: cat /version.json
	I0401 20:25:54.589683  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:54.589831  318306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:25:54.589927  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:25:54.617148  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:25:54.619127  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:25:54.809087  318306 ssh_runner.go:195] Run: systemctl --version
	I0401 20:25:54.814501  318306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:25:54.965066  318306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:25:54.971071  318306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:25:54.993325  318306 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:25:54.993396  318306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:25:55.029677  318306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
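The two find/-exec runs above boil down to renaming any loopback, bridge, or podman CNI config to *.mk_disabled so it cannot conflict with the CNI minikube installs later. A sketch of the same rename, runnable inside the node:

    # Disable conflicting default CNI configs (reversible: drop the suffix to restore)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*loopback.conf*' -o -name '*bridge*' -o -name '*podman*' \) \
      -not -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;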
	I0401 20:25:55.029699  318306 start.go:495] detecting cgroup driver to use...
	I0401 20:25:55.029736  318306 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:25:55.029819  318306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:25:55.046597  318306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:25:55.060877  318306 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:25:55.060926  318306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:25:55.075695  318306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:25:55.094020  318306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:25:55.199275  318306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:25:55.309864  318306 docker.go:233] disabling docker service ...
	I0401 20:25:55.309931  318306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:25:55.342052  318306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:25:55.356875  318306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:25:55.464654  318306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:25:55.562715  318306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:25:55.575265  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:25:55.592047  318306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:25:55.592099  318306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:55.602306  318306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:25:55.602371  318306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:55.613062  318306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:55.624247  318306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:55.634727  318306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:25:55.644502  318306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:25:55.653897  318306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
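After the three sed edits and the ip_forward write, the CRI-O drop-in should pin the pause image, use the cgroupfs manager, and put conmon in the pod cgroup. A verification sketch, with expected values taken from this run:

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    sysctl net.ipv4.ip_forward   # should now report 1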
	I0401 20:25:55.664593  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:25:55.753561  318306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:25:56.072365  318306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:25:56.072442  318306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:25:56.076963  318306 start.go:563] Will wait 60s for crictl version
	I0401 20:25:56.077029  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:25:56.081144  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:25:56.121586  318306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:25:56.121710  318306 ssh_runner.go:195] Run: crio --version
	I0401 20:25:56.164793  318306 ssh_runner.go:195] Run: crio --version
	I0401 20:25:56.210746  318306 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:25:56.211937  318306 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:25:56.231325  318306 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:25:56.235503  318306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
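The one-liner above is an idempotent /etc/hosts edit: strip any stale host.minikube.internal entry, append the current mapping, and stage the result in a temp file before copying it over the live file. Spelled out (printf used here instead of a literal tab, purely for readability):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      printf '192.168.85.1\thost.minikube.internal\n'   # append the fresh one
    } > /tmp/h.$$                                       # $$ = this shell's PID
    sudo cp /tmp/h.$$ /etc/hosts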
	I0401 20:25:56.246920  318306 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:25:56.247046  318306 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:25:56.247107  318306 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:25:56.310159  318306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:25:56.310244  318306 ssh_runner.go:195] Run: which lz4
	I0401 20:25:56.314491  318306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:25:56.318031  318306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:25:56.318093  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:25:57.522133  318306 crio.go:462] duration metric: took 1.207678915s to copy over tarball
	I0401 20:25:57.522197  318306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:26:01.262109  318306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.739881399s)
	I0401 20:26:01.262141  318306 crio.go:469] duration metric: took 3.739980892s to extract the tarball
	I0401 20:26:01.262150  318306 ssh_runner.go:146] rm: /preloaded.tar.lz4
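The preload path above copies a ~450 MB tarball to /preloaded.tar.lz4 and unpacks it into /var, which is what populates CRI-O's image and overlay stores without pulling from registries. Done by hand it is roughly (lz4 assumed installed in the node, as in the kicbase image):

    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images   # the preloaded images should now be listed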
	I0401 20:26:01.346404  318306 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:01.392938  318306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:26:01.392960  318306 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:26:01.393024  318306 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:01.393208  318306 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:01.393295  318306 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:01.393364  318306 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:01.393428  318306 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:01.393506  318306 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:26:01.393586  318306 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.393651  318306 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:26:01.397429  318306 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:01.398604  318306 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:26:01.398624  318306 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:01.398637  318306 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:01.398633  318306 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:01.398678  318306 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:01.398624  318306 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:26:01.398726  318306 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.593174  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.623784  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:01.624051  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:26:01.669865  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:01.670928  318306 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:26:01.670963  318306 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.670998  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.672117  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:01.766128  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:26:01.769577  318306 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:26:01.769622  318306 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:26:01.769671  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.769768  318306 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:26:01.769805  318306 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:01.769839  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.769909  318306 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:26:01.769937  318306 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:01.769962  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.770024  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.770080  318306 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:26:01.770096  318306 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:01.770121  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.770961  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:01.855908  318306 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:26:01.855951  318306 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:26:01.855999  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.870093  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:01.870116  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:01.870133  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:01.870173  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:26:01.870194  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:01.877649  318306 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:26:01.877692  318306 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:01.877760  318306 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.877839  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:26:02.051702  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:26:02.051817  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:02.051885  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:02.051947  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:02.051997  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:26:02.062679  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:26:02.062744  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:02.273804  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:26:02.273945  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:26:02.273991  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:26:02.274033  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:26:02.274071  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:26:02.274113  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:02.274152  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:26:02.452934  318306 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:26:02.453057  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:26:02.453120  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:26:02.453171  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:26:02.453229  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:26:02.453279  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:26:02.510483  318306 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:26:02.583862  318306 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:02.748489  318306 cache_images.go:92] duration metric: took 1.355511786s to LoadCachedImages
	W0401 20:26:02.748573  318306 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0401 20:26:02.748585  318306 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:26:02.748686  318306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:02.748780  318306 ssh_runner.go:195] Run: crio config
	I0401 20:26:02.849548  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:02.849571  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:02.849583  318306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:02.849607  318306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:26:02.850140  318306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
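
The rendered kubeadm.yaml above uses the kubeadm.k8s.io/v1beta2 API, the current config version for Kubernetes v1.20. Before the real init, a dry run against the same file surfaces schema or value errors without touching the node (binary path from this run; --dry-run is a standard kubeadm init flag):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run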
	
	I0401 20:26:02.850219  318306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:26:02.865078  318306 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:02.865163  318306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:02.882272  318306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:26:02.905086  318306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:02.927769  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
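The three "scp memory" writes above materialize the kubelet drop-in (10-kubeadm.conf, carrying the ExecStart flags logged earlier), the base kubelet.service unit, and the kubeadm config. Once systemd has re-read them, the merged unit can be inspected in-node:

    systemctl cat kubelet                        # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager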
	I0401 20:26:02.948596  318306 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:02.953796  318306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:02.968323  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:03.078699  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:03.101534  318306 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:26:03.101565  318306 certs.go:194] generating shared ca certs ...
	I0401 20:26:03.101587  318306 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:03.101835  318306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:03.101896  318306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:03.101906  318306 certs.go:256] generating profile certs ...
	I0401 20:26:03.101984  318306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:26:03.102003  318306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.crt with IP's: []
	I0401 20:26:03.572399  318306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.crt ...
	I0401 20:26:03.572427  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.crt: {Name:mkc20cce4a0884d62bf870044c4c9a7efad88228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:03.572608  318306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key ...
	I0401 20:26:03.572628  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key: {Name:mka283b2e7e42fee2fafad8cc1594a8fd919db4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:03.572745  318306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:26:03.572769  318306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt.4d8a9adb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0401 20:26:03.851325  318306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt.4d8a9adb ...
	I0401 20:26:03.851360  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt.4d8a9adb: {Name:mkac69ae9b976c12286617a4b4474554bcb8e893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:03.851593  318306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb ...
	I0401 20:26:03.851614  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb: {Name:mk28216c6f49ef6faecf1e696ba063e0b7862608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:03.851743  318306 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt.4d8a9adb -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt
	I0401 20:26:03.851840  318306 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key
	I0401 20:26:03.851919  318306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:26:03.851945  318306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt with IP's: []
	I0401 20:26:04.162953  318306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt ...
	I0401 20:26:04.162984  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt: {Name:mkd1dd4963ede83ed3f6c579f55c700bb427c976 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:04.163154  318306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key ...
	I0401 20:26:04.163172  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key: {Name:mkc0d9879a41d0f866e3a040461dec3555e322a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
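The apiserver certificate generated above must carry the service VIP, loopback, alternate service IP, and node IP logged at 20:26:03.572769 as SANs. A host-side inspection sketch (adjust the .minikube path if MINIKUBE_HOME differs from this run's layout):

    openssl x509 -noout -text \
      -in "$HOME/.minikube/profiles/old-k8s-version-964633/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2 among the SANs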
	I0401 20:26:04.163414  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:04.163470  318306 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:04.163482  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:04.163504  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:04.163529  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:04.163549  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:04.163584  318306 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:04.164210  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:04.194183  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:04.221537  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:04.250424  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:04.281460  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:26:04.319634  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:04.354718  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:04.387916  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:04.421729  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:04.452771  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:04.487848  318306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:04.517602  318306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:04.536069  318306 ssh_runner.go:195] Run: openssl version
	I0401 20:26:04.542406  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:04.554470  318306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:04.558885  318306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:04.558944  318306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:04.566739  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:04.576277  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:04.586045  318306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:04.591030  318306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:04.591096  318306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:04.599545  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:04.612828  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:04.623327  318306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:04.627672  318306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:04.627725  318306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:04.634941  318306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
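The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above come from OpenSSL's subject-hash lookup scheme: openssl x509 -hash prints an 8-hex-digit hash, and a <hash>.0 symlink in /etc/ssl/certs makes the cert discoverable without rebuilding a CA bundle. Illustrated with the test cert from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem)
    echo "$h"                    # 3ec20f2e for this cert, matching the link created above
    ls -l "/etc/ssl/certs/$h.0"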
	I0401 20:26:04.646552  318306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:04.650551  318306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:04.650614  318306 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:04.650696  318306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:04.650739  318306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:04.715533  318306 cri.go:89] found id: ""
	I0401 20:26:04.715612  318306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:04.727157  318306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:04.737898  318306 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:04.737966  318306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:04.747781  318306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:04.747822  318306 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:04.747873  318306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:04.759866  318306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:04.759911  318306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:04.770127  318306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:04.780017  318306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:04.780079  318306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:04.791126  318306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:04.804509  318306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:04.804571  318306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:04.814989  318306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:04.828539  318306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:04.828611  318306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:04.839458  318306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:04.973534  318306 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:05.091493  318306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
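The sha256 value in the join commands above is a hash of the cluster CA's public key, letting a joining node pin the CA it bootstraps against. It can be recomputed in-node from the CA cert (path per the certs scp earlier; command pattern as documented for kubeadm token-based joins):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'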
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
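The cni.yaml applied above is minikube's kindnet manifest, chosen because the docker driver plus crio runtime was detected. Assuming the manifest names its DaemonSet kindnet and labels pods app=kindnet (as upstream kindnet does; both are assumptions here), readiness can be checked with:

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide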
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 

                                                
                                                
** /stderr **
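The exit above is a readiness timeout rather than a provisioning crash: kubeadm init completed and the storage addons were enabled, but the node never reported Ready before the wait's context expired (the node_ready.go polls run from 20:26:43 to 20:30:43). As a sketch of how the stuck condition could be inspected by hand, assuming the profile container is still running and reusing the same in-container kubectl and kubeconfig paths the log itself uses:

	docker exec old-k8s-version-964633 sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
	docker exec old-k8s-version-964633 sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe node old-k8s-version-964633

With the docker driver + crio + kindnet combination chosen at cni.go:143 above, a node that stays NotReady is commonly still waiting on the CNI to come up; describe node surfaces the specific cause in the message of the Ready condition.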
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:51.595131743Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f156c5de777c528d6f9375314eb0d4cbc858057b93c8250916b99a0c025d2197",
	            "SandboxKey": "/var/run/docker/netns/f156c5de777c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:e3:3a:a8:12:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "243297cc045b5d60c15285cd09a136adfdf271f0421c51d1725f61e9cf50e39f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
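The NetworkSettings.Ports map above is how the test harness reaches the machine: the SSH client at 127.0.0.1:33088 in the stderr log corresponds to the 22/tcp binding. The same lookup the harness performs (the command is verbatim from the cli_runner lines in the log) can be rerun by hand:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633

Run interactively, this prints the mapped host port for the container's SSH endpoint, wrapped in the quoting from the format string ('33088').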
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.138763084s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
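The shell snippet above is minikube's idempotent hostname fixup: it leaves /etc/hosts alone when the hostname is already mapped, rewrites an existing 127.0.1.1 line in place, and only appends as a last resort. A standalone sketch of the same pattern (hypothetical; run against a scratch copy rather than the real /etc/hosts):

    # Hypothetical standalone test of the pattern, using a scratch file.
    NODE_NAME=embed-certs-974821
    cp /etc/hosts /tmp/hosts.test
    if ! grep -q "\s${NODE_NAME}$" /tmp/hosts.test; then
      if grep -q '^127\.0\.1\.1\s' /tmp/hosts.test; then
        # a 127.0.1.1 line exists: rewrite it in place
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NODE_NAME}/" /tmp/hosts.test
      else
        # no such line: append a fresh mapping
        echo "127.0.1.1 ${NODE_NAME}" >> /tmp/hosts.test
      fi
    fi
    grep "${NODE_NAME}" /tmp/hosts.test    # expect: 127.0.1.1 embed-certs-974821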
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
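The /etc/sysconfig/crio.minikube write a few lines up is how minikube hands its extra flags (here --insecure-registry 10.96.0.0/12) to CRI-O before restarting it. One way to confirm the drop-in landed, assuming the standard minikube CLI (a sketch, not part of the test run):

    # Hypothetical verification from the host, via the profile's SSH tunnel.
    minikube -p embed-certs-974821 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p embed-certs-974821 ssh -- systemctl is-active crio   # expect: active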
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
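	The generated manifest above stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It can be sanity-checked offline before kubeadm init consumes it; both commands below are standard kubeadm (config validate requires kubeadm >= v1.26), shown here as a sketch using the path the log copies the file to:

	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run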
	
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
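Each of the three binaries (kubectl, kubelet, kubeadm) follows the same check-then-copy pattern visible above: a stat existence probe that exits 1 when the file is missing, which triggers a transfer from the local cache. Condensed into a hypothetical local equivalent (install stands in for the scp over SSH that minikube actually performs):

    # Hypothetical condensation of the existence-check-then-copy step.
    BIN=/var/lib/minikube/binaries/v1.32.2/kubeadm
    CACHE=/home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
    stat -c "%s %y" "$BIN" 2>/dev/null || sudo install -D -m 0755 "$CACHE" "$BIN"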
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
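The one-liner above updates /etc/hosts atomically: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended to a temp file, and a single sudo cp swaps the result into place. A quick hypothetical check from inside the node that the entry resolves:

    getent hosts control-plane.minikube.internal    # expect: 192.168.76.2 control-plane.minikube.internal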
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
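The apiserver certificate written above carries the SANs listed at 20:26:21.025727 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). They can be read back with openssl (-ext requires OpenSSL 1.1.1+); a sketch using the profile path from the log:

    openssl x509 -noout -subject -ext subjectAltName \
      -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt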
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
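The *.0 symlink names in this sequence (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, which is how the system trust store indexes CA certificates. Reproducing one by hand (a sketch; paths taken from the log):

    # openssl prints the subject hash that names the symlink (here: b5213941).
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    ls -l "/etc/ssl/certs/${h}.0"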
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
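The scan above probes each candidate /24 and skips any subnet that already has a bridge attached; 192.168.103.0/24 is the first free one. A minimal shell sketch of the same scan and creation with the plain docker CLI (the label filter matches the labels minikube attaches above; subnet, gateway, and MTU values are the ones from this run):

	# subnets already claimed by minikube-created networks
	docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true \
	  | xargs -r docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# create the bridge network as the log does
	docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 \
	  default-k8s-diff-port-993330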
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
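The tar run unpacks the v1.32.2 CRI-O image preload into the named volume that will later back /var inside the node container. To spot-check the extraction, the same kicbase image can be reused with a read-only mount (a sketch; it assumes du is at /usr/bin/du in the image and that the preloaded images land under the standard CRI-O store at /var/lib/containers):

	docker run --rm --entrypoint /usr/bin/du \
	  -v default-k8s-diff-port-993330:/var:ro \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 \
	  -sh /var/lib/containers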
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
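Note that the competing CNI configs are disabled non-destructively: they are renamed with a .mk_disabled suffix rather than deleted, leaving kindnet's config as the only one CRI-O will load. Undoing it is the reverse rename (a sketch):

	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done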
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
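After the sed edits and the restart, the CRI-O drop-in carries the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl the rest of the run depends on. A quick way to confirm, with the expected values taken from the commands above:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock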
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
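The /etc/hosts update is idempotent: any existing host.minikube.internal line is stripped, the current gateway IP is appended, and the temp file is copied back in one step. The same pattern spelled out, with the IP and name from this run:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts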
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
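Each profile gets three signed pairs: a kubectl client cert, an apiserver serving cert, and a front-proxy ("aggregator") client cert. The apiserver cert has to carry the service VIP, loopback, and node IP as SANs; a sketch to verify, using the path and SAN list logged above:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.94.2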
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
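The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control plane with the usual openssl pipeline (CA path per the certificateDir used in this run; assumes an RSA CA key):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# 3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37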
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
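The <hash>.0 symlinks follow OpenSSL's subject-hash convention: TLS clients locate a CA in /etc/ssl/certs by the hash of its subject name, so each imported PEM gets a hash-named link (3ec20f2e, b5213941, 51391683 above). A sketch of producing one such link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"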
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
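The four grep/rm pairs above are stale-config cleanup: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 (including one that is simply missing) is removed so kubeadm init can rewrite it. A compact equivalent:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done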
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
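With the CNI manifest applied, the run also binds cluster-admin to the kube-system default service account (the minikube-rbac binding) and labels the node. Both can be checked with the bundled kubectl (a sketch; the app=kindnet pod label is how kindnet's DaemonSet is conventionally labeled and is an assumption here):

	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet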
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
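The "Using SSH client type: native" exchanges above go through the host port Docker mapped onto the container's port 22 (33103 here). A minimal manual equivalent of that lookup, reusing the inspect template, key path, and "docker" user from this log (the ssh invocation itself is our sketch, not a command minikube runs):

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-993330)
	ssh -o StrictHostKeyChecking=no -p "$PORT" \
	    -i /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa \
	    docker@127.0.0.1 hostname
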
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
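An openssl approximation of the server-cert generation at provision.go:117 above, with the same org and SAN list (file names follow the log; the openssl recipe is an assumption, since minikube does this in Go):

	cd /home/jenkins/minikube-integration/20506-16361/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.default-k8s-diff-port-993330"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:default-k8s-diff-port-993330,DNS:localhost,DNS:minikube')
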
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
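Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (an illustrative reconstruction, including assumed section headers; any other keys already present in the base image's drop-in are untouched):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
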
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
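The [Unit]/[Service] text above is a systemd override: the bare ExecStart= line clears the packaged start command before minikube's own ExecStart is set, and the file is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On the node, the merged unit can be inspected with (our suggestion, not a step in the run):

	systemctl cat kubelet.service
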
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
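The rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm init. To check a config like this by hand, recent kubeadm (v1.27+) can validate it against the v1beta4 schema (a suggestion, not something this run does):

	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml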
	
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
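With the three profile certs written (client for "minikube-user", apiserver, and the aggregator proxy-client), the SAN set requested above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]) can be confirmed straight from the file; this inspection is ours, not a step minikube runs:

	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
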
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
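The eight-hex-digit link names in these commands (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes; that naming convention is what lets TLS clients locate a CA in /etc/ssl/certs by hash lookup. The generic form of the command pair the log keeps running:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
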
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
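
All three parallel profiles print the same --discovery-token-ca-cert-hash (3d93fb…18ec37) because minikube reuses one cluster CA across profiles under the same .minikube home; the hash is simply a SHA-256 digest of that CA's public key. For reference, a sketch of the standard upstream recipe for recomputing it (the path assumes kubeadm's default CA location):

    # Recompute kubeadm's discovery hash from the cluster CA public key:
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
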
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
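
With the docker driver and the crio runtime, minikube selects kindnet and pushes its manifest (the 2601-byte cni.yaml scp'd above) through the bundled kubectl in the apply step that follows. Once applied, kindnet runs as a DaemonSet in kube-system; a quick sanity check would look like this (a sketch: the DaemonSet name and the app=kindnet label are assumptions based on the upstream kindnet manifest):

    # Confirm the kindnet CNI came up after the apply step:
    kubectl --context no-preload-671514 -n kube-system get daemonset kindnet
    kubectl --context no-preload-671514 -n kube-system get pods -l app=kindnet
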
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
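
The [kubelet-check] and [api-check] phases above are plain health probes retried against fixed endpoints until their 4m0s budgets expire. The kubelet probe can be reproduced by hand on the node (a minimal sketch):

    # Probe the kubelet's local health endpoint, as kubeadm does above:
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
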
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
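
The oom_adj read above is a sanity check that the apiserver is shielded from memory pressure: -16 on the legacy -17..15 scale tells the kernel's OOM killer to pick almost any other process first. The same check by hand:

    # Read the apiserver's legacy OOM adjustment, as ops.go logs above:
    cat /proc/$(pgrep kube-apiserver)/oom_adj    # -16: nearly OOM-exempt
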
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
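
The burst of "kubectl get sa default" runs above (repeated for each profile) is a ~500ms poll: the bootstrapper treats the appearance of the "default" ServiceAccount as its signal that the RBAC elevation has taken effect. An equivalent loop, with the binary and kubeconfig paths taken from the log:

    # Poll until the default ServiceAccount exists, as the log above does:
    KUBECTL=/var/lib/minikube/binaries/v1.32.2/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
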
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
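
Each "docker container inspect -f …" above digs the host port that Docker mapped to the container's 22/tcp out of .NetworkSettings.Ports; the ssh clients then dial 127.0.0.1 on that port (33093 here). The dedicated subcommand gives the same answer (container name from the log):

    docker port no-preload-671514 22/tcp    # e.g. 0.0.0.0:33093
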
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
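
The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile before kubectl replace pushes the ConfigMap back; that is what makes host.minikube.internal resolve to the host-side gateway. The injected fragment, with the IP from this profile's log line:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
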
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long-term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
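
Rescaling coredns to one replica, logged here for every profile, trims the stock two-replica Deployment down to what a single-node cluster needs. The equivalent by hand (context name assumed to match the profile):

    kubectl --context no-preload-671514 -n kube-system scale deployment coredns --replicas=1
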
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
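
All four StartStop clusters spend their readiness budget polling node_ready.go:53; three (no-preload-671514, embed-certs-974821, old-k8s-version-964633) hit the 4m0s deadline above and exit with GUEST_START, and default-k8s-diff-port-993330 is still polling when this portion of the log ends. A quick way to surface the condition blocking readiness on any of these profiles is to read the node status directly; a sketch, using the old-k8s-version profile as the example:

	$ kubectl --context old-k8s-version-964633 get nodes
	$ kubectl --context old-k8s-version-964633 describe node old-k8s-version-964633

The "describe nodes" section below captures exactly this output: the Ready condition is False with reason KubeletNotReady because no CNI configuration file exists in /etc/cni/net.d/.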
	
	
	==> CRI-O <==
	Apr 01 20:27:15 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:27:15.619846848Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a8efaee6-fed8-4d46-8cdb-712c88b1cedb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:27:27 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:27:27.554841296Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=168e3c22-6d61-4263-bfd1-7e4c244dff47 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:27:27 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:27:27.555145721Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=168e3c22-6d61-4263-bfd1-7e4c244dff47 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:27:27 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:27:27.555617513Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2828cb2e-023b-4220-8500-3f3285999904 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:27:27 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:27:27.580362033Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:28:10 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:10.554753441Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0419f232-fe5d-4f82-b11a-e8e2ca33be04 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:28:10 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:10.555132519Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0419f232-fe5d-4f82-b11a-e8e2ca33be04 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:28:24 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:24.554701748Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5fd8353a-24a8-419d-a5ee-4a03251ab4ed name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:28:24 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:24.554990267Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5fd8353a-24a8-419d-a5ee-4a03251ab4ed name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:28:24 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:24.555577402Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e7bad9db-52d6-4d16-80f7-9b8450e40262 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:28:24 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:28:24.556972507Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:07 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:07.554706397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2ba0296c-5e60-439b-908e-125303d54087 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:07 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:07.554999310Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2ba0296c-5e60-439b-908e-125303d54087 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:21 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:21.554586824Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2454b06e-ed82-4386-a372-f29562f6be87 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:21 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:21.554864514Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2454b06e-ed82-4386-a372-f29562f6be87 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:32 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:32.554678493Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=86d5f108-2c6a-41a5-b69e-f682c7d5405a name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:32 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:32.554893407Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=86d5f108-2c6a-41a5-b69e-f682c7d5405a name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:46 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:46.554820264Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=867caf98-bdaa-4b6f-964c-68df0aed3494 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:46 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:46.555106289Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=867caf98-bdaa-4b6f-964c-68df0aed3494 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:29:46 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:46.555793257Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7532c3df-5dce-4192-9f67-aaaa30f899a6 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:29:46 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:29:46.566117010Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:30:32 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:30:32.554663792Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cc3b73f1-74eb-4e9a-93bb-542f69fac4c8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:30:32 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:30:32.554961851Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cc3b73f1-74eb-4e9a-93bb-542f69fac4c8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:30:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:30:43.554648575Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=23c81cd6-cedd-464f-a77d-6aaffef18f39 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:30:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:30:43.554941339Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=23c81cd6-cedd-464f-a77d-6aaffef18f39 name=/runtime.v1alpha2.ImageService/ImageStatus
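
The CRI-O log above shows the kindnet CNI image, docker.io/kindest/kindnetd:v20250214-acbabc1a, reported "not found" on every status check, with each "Trying to access" pull attempt apparently never completing before the next check. That pattern is consistent with the pull being throttled or failing silently (for example, Docker Hub rate limiting), though the log does not state a cause. One workaround sketch, assuming the image can be pulled on the host, is to side-load it into the node:

	$ docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	$ minikube -p old-k8s-version-964633 image load docker.io/kindest/kindnetd:v20250214-acbabc1a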
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b18de8419e15       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   4 minutes ago       Running             kube-proxy                0                   45b225c010954       kube-proxy-vb8ks
	4384af78a1883       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   4 minutes ago       Running             kube-controller-manager   0                   7e4cef1969b72       kube-controller-manager-old-k8s-version-964633
	9513e7ad765e4       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   4 minutes ago       Running             etcd                      0                   aabb404aa7c03       etcd-old-k8s-version-964633
	f2526055eea0e       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   4 minutes ago       Running             kube-scheduler            0                   0a05fd341a521       kube-scheduler-old-k8s-version-964633
	2064fb7c665fb       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   4 minutes ago       Running             kube-apiserver            0                   b311a7ae56993       kube-apiserver-old-k8s-version-964633
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:30:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:26:41 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:26:41 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:26:41 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:26:41 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 de7c8d50f85047d185c1ae1aa27193dd
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m13s
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m30s (x5 over 4m30s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x5 over 4m30s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x5 over 4m30s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m13s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s                  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s                  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s                  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy  Starting kube-proxy.
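
Note: the Ready=False condition above is the key symptom: kubelet reports no CNI configuration in /etc/cni/net.d/, so the node never becomes Ready. A quick way to confirm from the host (a sketch, using the profile name from this run) is to list the CNI config directory inside the node:

	minikube -p old-k8s-version-964633 ssh -- ls -la /etc/cni/net.d/

An empty directory here means kindnet never wrote its config, which matches the kubelet errors later in this log.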
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
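
Note: the repeated "martian source" messages are the kernel logging pod-network (10.244.0.0/24) packets seen on eth0; with bridge-style CNIs running inside Docker they are common and generally harmless. Whether they get logged at all is sysctl-controlled; a quick check (a sketch):

	sysctl net.ipv4.conf.all.log_martians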
	
	
	==> etcd [9513e7ad765e4b69c4cbbfbd6cb33f21a3a48b715bdea7a1ff49cc1566bcc760] <==
	2025-04-01 20:26:35.601706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:26:45.601736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:26:55.601700 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:05.601725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:15.601835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:25.601760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:35.601788 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:45.601737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:27:55.601837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:05.601793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:15.601826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:25.601685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:35.601952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:45.601732 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:28:55.601666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:05.601871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:15.601839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:25.601802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:35.601762 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:45.601855 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:29:55.601828 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:30:05.601820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:30:15.601802 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:30:25.601737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:30:35.603071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
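
Note: etcd answers /health OK every 10 seconds across the whole window, so cluster storage is not the failing component. The same signal is available through the API server's per-component health endpoint (a sketch, assuming kubectl is pointed at this context):

	kubectl --context old-k8s-version-964633 get --raw='/healthz/etcd'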
	
	
	==> kernel <==
	 20:30:44 up  1:13,  0 users,  load average: 0.33, 2.72, 2.48
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2064fb7c665fb767c07a50e206db452bfd0e93dc10750dd7ecf94bfe4beb0cc4] <==
	I0401 20:26:26.199202       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:31.519672       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:42.222977       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0401 20:26:42.389064       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:27:00.489800       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:27:00.489847       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:27:00.489856       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:27:31.963520       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:27:31.963564       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:27:31.963572       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:28:09.786998       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:28:09.787050       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:28:09.787058       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:28:48.574878       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:28:48.574931       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:28:48.574942       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:29:28.626747       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:29:28.626797       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:29:28.626805       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:29:59.334439       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:29:59.334485       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:29:59.334493       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:30:35.802535       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:30:35.802577       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:30:35.802585       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
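
Note: the recurring parsed scheme: "passthrough" / pick_first lines are routine gRPC connection management from the API server's etcd client, not errors. Overall API server readiness can be broken down per check (a sketch):

	kubectl --context old-k8s-version-964633 get --raw='/readyz?verbose'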
	
	
	==> kube-controller-manager [4384af78a188378e4c730aadae8ad08f38d60dd777008b0a8138a2838ea2ab7f] <==
	I0401 20:26:42.217841       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0401 20:26:42.217905       1 shared_informer.go:247] Caches are synced for job 
	I0401 20:26:42.218052       1 shared_informer.go:247] Caches are synced for attach detach 
	I0401 20:26:42.218327       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0401 20:26:42.218385       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0401 20:26:42.218730       1 shared_informer.go:247] Caches are synced for deployment 
	I0401 20:26:42.219644       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0401 20:26:42.222868       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0401 20:26:42.228067       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0401 20:26:42.229898       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0401 20:26:42.242716       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8m52n"
	I0401 20:26:42.255473       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5bjk4"
	I0401 20:26:42.271135       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0401 20:26:42.377788       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0401 20:26:42.379364       1 shared_informer.go:247] Caches are synced for stateful set 
	I0401 20:26:42.400582       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vb8ks"
	I0401 20:26:42.400651       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.426096       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.434446       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rmrss"
	I0401 20:26:42.566911       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0401 20:26:42.917995       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:42.918028       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0401 20:26:42.918408       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:43.539217       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0401 20:26:43.546242       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-8m52n"
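
Note: the final two events (CoreDNS scaled up to 2, then down to 1) are expected; minikube trims the kubeadm default of two CoreDNS replicas to one on single-node clusters. This can be verified with (a sketch):

	kubectl --context old-k8s-version-964633 -n kube-system get deploy coredns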
	
	
	==> kube-proxy [7b18de8419e1524ddac8727fd7e9261582448e897f548b26ad3311e27cf0e6fb] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
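
Note: kube-proxy started cleanly and, since no proxy mode was configured, fell back to the iptables proxier. Its programmed service rules can be spot-checked from inside the node (a sketch):

	minikube -p old-k8s-version-964633 ssh -- sudo iptables -t nat -L KUBE-SERVICES | head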
	
	
	==> kube-scheduler [f2526055eea0e40e9b5009904a748c68af694b09fbeb58de9177b4b5f55ffcea] <==
	E0401 20:26:22.050850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:22.050959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:22.051031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:22.051104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051131       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:22.051235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:22.051280       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:22.051338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:22.051403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
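
Note: the burst of "forbidden" reflector errors is the usual startup race before the scheduler's RBAC bindings propagate; the final line shows its informer caches synced, after which scheduling proceeds normally. When reading such logs, the transient noise can be filtered out (a sketch):

	kubectl --context old-k8s-version-964633 -n kube-system logs kube-scheduler-old-k8s-version-964633 | grep -v reflector.go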
	
	
	==> kubelet <==
	Apr 01 20:29:21 old-k8s-version-964633 kubelet[2076]: E0401 20:29:21.555181    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:21 old-k8s-version-964633 kubelet[2076]: E0401 20:29:21.671821    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:26 old-k8s-version-964633 kubelet[2076]: E0401 20:29:26.672548    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:31 old-k8s-version-964633 kubelet[2076]: E0401 20:29:31.673192    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:32 old-k8s-version-964633 kubelet[2076]: E0401 20:29:32.555150    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:36 old-k8s-version-964633 kubelet[2076]: E0401 20:29:36.673863    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:41 old-k8s-version-964633 kubelet[2076]: E0401 20:29:41.674535    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:46 old-k8s-version-964633 kubelet[2076]: E0401 20:29:46.675179    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:51 old-k8s-version-964633 kubelet[2076]: E0401 20:29:51.675844    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:29:56 old-k8s-version-964633 kubelet[2076]: E0401 20:29:56.676602    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:01 old-k8s-version-964633 kubelet[2076]: E0401 20:30:01.677329    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:06 old-k8s-version-964633 kubelet[2076]: E0401 20:30:06.678016    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:11 old-k8s-version-964633 kubelet[2076]: E0401 20:30:11.678714    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:16 old-k8s-version-964633 kubelet[2076]: E0401 20:30:16.679300    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:18 old-k8s-version-964633 kubelet[2076]: E0401 20:30:18.357521    2076 remote_image.go:113] PullImage "docker.io/kindest/kindnetd:v20250214-acbabc1a" from image service failed: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:30:18 old-k8s-version-964633 kubelet[2076]: E0401 20:30:18.357584    2076 kuberuntime_image.go:51] Pull image "docker.io/kindest/kindnetd:v20250214-acbabc1a" failed: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:30:18 old-k8s-version-964633 kubelet[2076]: E0401 20:30:18.357742    2076 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kindnet-token-pbwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878): ErrImagePull: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:30:18 old-k8s-version-964633 kubelet[2076]: E0401 20:30:18.357814    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 01 20:30:21 old-k8s-version-964633 kubelet[2076]: E0401 20:30:21.679919    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:26 old-k8s-version-964633 kubelet[2076]: E0401 20:30:26.680585    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:31 old-k8s-version-964633 kubelet[2076]: E0401 20:30:31.681294    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:32 old-k8s-version-964633 kubelet[2076]: E0401 20:30:32.555254    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:30:36 old-k8s-version-964633 kubelet[2076]: E0401 20:30:36.681920    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:41 old-k8s-version-964633 kubelet[2076]: E0401 20:30:41.682576    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:30:43 old-k8s-version-964633 kubelet[2076]: E0401 20:30:43.555139    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
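
Note: this kubelet log pins down the root cause: the kindnet CNI image cannot be pulled because of Docker Hub's unauthenticated pull rate limit (toomanyrequests), so no CNI config is ever written and the node stays NotReady. One workaround is to pull the image where credentials or a local cache are available and side-load it into the profile (a sketch, using the image tag from this log):

	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p old-k8s-version-964633 image load docker.io/kindest/kindnetd:v20250214-acbabc1a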
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1 (72.534176ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (298.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (287.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m45.750800611s)
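
Note: with --preload=false minikube skips the preloaded image tarball and pulls every control-plane image individually (visible in the cache.go/image.go lines in the stderr below), which makes this test particularly exposed to registry throttling. The images could be cached ahead of time, e.g. (a sketch for one of them):

	minikube cache add registry.k8s.io/kube-apiserver:v1.32.2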

                                                
                                                
-- stdout --
	* [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:25:52.747868  320217 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:25:52.748189  320217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:25:52.748205  320217 out.go:358] Setting ErrFile to fd 2...
	I0401 20:25:52.748212  320217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:25:52.748523  320217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:25:52.749230  320217 out.go:352] Setting JSON to false
	I0401 20:25:52.750628  320217 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4099,"bootTime":1743535054,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:25:52.750725  320217 start.go:139] virtualization: kvm guest
	I0401 20:25:52.752780  320217 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:25:52.754175  320217 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:25:52.754199  320217 notify.go:220] Checking for updates...
	I0401 20:25:52.756626  320217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:25:52.757850  320217 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:25:52.758987  320217 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:25:52.760151  320217 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:25:52.761184  320217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:25:52.762837  320217 config.go:182] Loaded profile config "bridge-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:52.762974  320217 config.go:182] Loaded profile config "flannel-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:52.763125  320217 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:25:52.763225  320217 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:25:52.792387  320217 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:25:52.792477  320217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:25:52.844445  320217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-04-01 20:25:52.834826709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:25:52.844539  320217 docker.go:318] overlay module found
	I0401 20:25:52.846353  320217 out.go:177] * Using the docker driver based on user configuration
	I0401 20:25:52.847595  320217 start.go:297] selected driver: docker
	I0401 20:25:52.847611  320217 start.go:901] validating driver "docker" against <nil>
	I0401 20:25:52.847639  320217 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:25:52.848499  320217 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:25:52.898539  320217 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:75 SystemTime:2025-04-01 20:25:52.889195826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:25:52.898686  320217 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:25:52.898876  320217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:25:52.900749  320217 out.go:177] * Using Docker driver with root privileges
	I0401 20:25:52.902230  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:25:52.902289  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:25:52.902300  320217 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:25:52.902355  320217 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:25:52.903835  320217 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:25:52.905114  320217 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:25:52.906523  320217 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:25:52.907670  320217 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:25:52.907760  320217 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:25:52.907753  320217 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:25:52.907791  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json: {Name:mkaddeccd9c5c16fe06a37f4ac1594cf091949f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:25:52.907919  320217 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.907941  320217 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.907978  320217 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908001  320217 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908010  320217 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908048  320217 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:25:52.908059  320217 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 130.021µs
	I0401 20:25:52.908073  320217 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:25:52.908029  320217 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908091  320217 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0401 20:25:52.908088  320217 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908082  320217 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.908131  320217 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0401 20:25:52.908245  320217 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:25:52.908276  320217 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:25:52.908294  320217 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:25:52.908297  320217 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:25:52.908245  320217 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:25:52.909144  320217 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0401 20:25:52.909326  320217 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:25:52.909367  320217 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:25:52.909521  320217 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:25:52.909546  320217 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:25:52.909871  320217 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:25:52.909981  320217 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0401 20:25:52.930823  320217 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:25:52.930847  320217 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:25:52.930866  320217 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:25:52.930922  320217 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:25:52.931034  320217 start.go:364] duration metric: took 89.163µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:25:52.931072  320217 start.go:93] Provisioning new machine with config: &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:25:52.931162  320217 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:25:52.933239  320217 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:25:52.933455  320217 start.go:159] libmachine.API.Create for "no-preload-671514" (driver="docker")
	I0401 20:25:52.933482  320217 client.go:168] LocalClient.Create starting
	I0401 20:25:52.933532  320217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:25:52.933559  320217 main.go:141] libmachine: Decoding PEM data...
	I0401 20:25:52.933569  320217 main.go:141] libmachine: Parsing certificate...
	I0401 20:25:52.933608  320217 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:25:52.933627  320217 main.go:141] libmachine: Decoding PEM data...
	I0401 20:25:52.933638  320217 main.go:141] libmachine: Parsing certificate...
	I0401 20:25:52.934155  320217 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:25:52.953826  320217 cli_runner.go:211] docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:25:52.953883  320217 network_create.go:284] running [docker network inspect no-preload-671514] to gather additional debugging logs...
	I0401 20:25:52.953898  320217 cli_runner.go:164] Run: docker network inspect no-preload-671514
	W0401 20:25:52.970598  320217 cli_runner.go:211] docker network inspect no-preload-671514 returned with exit code 1
	I0401 20:25:52.970627  320217 network_create.go:287] error running [docker network inspect no-preload-671514]: docker network inspect no-preload-671514: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-671514 not found
	I0401 20:25:52.970643  320217 network_create.go:289] output of [docker network inspect no-preload-671514]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-671514 not found
	
	** /stderr **
	I0401 20:25:52.970727  320217 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:25:52.989735  320217 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:25:52.990524  320217 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:25:52.991332  320217 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:25:52.992204  320217 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff9470}
	I0401 20:25:52.992227  320217 network_create.go:124] attempt to create docker network no-preload-671514 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0401 20:25:52.992269  320217 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-671514 no-preload-671514
	I0401 20:25:53.043780  320217 network_create.go:108] docker network no-preload-671514 192.168.76.0/24 created
	I0401 20:25:53.043812  320217 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-671514" container
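	The scan above walks candidate /24 subnets with the third octet stepping by 9 (49, 58, 67, then 76) and takes the first one no existing bridge claims; the node address is then pinned inside that subnet. A minimal Go sketch of that walk, with the start value and step size read off this log rather than lifted from minikube's source:

    package main

    import (
    	"fmt"
    	"net"
    )

    // pickFreeSubnet loosely mirrors the scan in the log: walk candidate /24s
    // from 192.168.49.0 with the third octet stepping by 9, and return the
    // first subnet not already claimed by an existing bridge.
    func pickFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			continue // e.g. 192.168.49.0/24 is held by br-64a5a6ce16e8
    		}
    		_, subnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			return nil, err
    		}
    		return subnet, nil
    	}
    	return nil, fmt.Errorf("no free private /24 found")
    }

    func main() {
    	taken := map[string]bool{ // the three bridges the log skips
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	subnet, err := pickFreeSubnet(taken)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using", subnet) // prints: using 192.168.76.0/24
    }

	With the three bridges above occupying .49, .58 and .67, the sketch lands on 192.168.76.0/24, matching the docker network create call in the log.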
	I0401 20:25:53.043883  320217 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:25:53.063648  320217 cli_runner.go:164] Run: docker volume create no-preload-671514 --label name.minikube.sigs.k8s.io=no-preload-671514 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:25:53.084856  320217 oci.go:103] Successfully created a docker volume no-preload-671514
	I0401 20:25:53.084911  320217 cli_runner.go:164] Run: docker run --rm --name no-preload-671514-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-671514 --entrypoint /usr/bin/test -v no-preload-671514:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:25:53.102500  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0401 20:25:53.142537  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0401 20:25:53.147465  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0401 20:25:53.180524  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:25:53.180550  320217 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 272.550667ms
	I0401 20:25:53.180563  320217 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:25:53.181061  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0401 20:25:53.223293  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0401 20:25:53.256710  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0401 20:25:53.271951  320217 cache.go:162] opening:  /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0401 20:25:53.599015  320217 oci.go:107] Successfully prepared a docker volume no-preload-671514
	I0401 20:25:53.599055  320217 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	W0401 20:25:53.599193  320217 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:25:53.599342  320217 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:25:53.667438  320217 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-671514 --name no-preload-671514 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-671514 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-671514 --network no-preload-671514 --ip 192.168.76.2 --volume no-preload-671514:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:25:53.892209  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:25:53.892239  320217 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 984.262372ms
	I0401 20:25:53.892253  320217 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:25:53.996616  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Running}}
	I0401 20:25:54.022004  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:25:54.045595  320217 cli_runner.go:164] Run: docker exec no-preload-671514 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:25:54.099986  320217 oci.go:144] the created container "no-preload-671514" has a running status.
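	Every published port in the docker run line above is bound to an ephemeral localhost port (--publish=127.0.0.1::22 and friends); the concrete port is recovered afterwards with the same Go template this log shows being passed to docker container inspect. A small sketch of that lookup, using the container name from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort recovers the ephemeral localhost port behind a
    // --publish=127.0.0.1::<port> binding, using the same Go template
    // the log passes to docker container inspect.
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("no-preload-671514", "22/tcp")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ssh host port:", p) // 33093 in this run
    }

	For this run the 22/tcp lookup resolves to 33093, which is the port every SSH step below dials.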
	I0401 20:25:54.100028  320217 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa...
	I0401 20:25:54.435393  320217 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:25:54.485216  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:25:54.505540  320217 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:25:54.505560  320217 kic_runner.go:114] Args: [docker exec --privileged no-preload-671514 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:25:54.568030  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:25:54.591213  320217 machine.go:93] provisionDockerMachine start ...
	I0401 20:25:54.591290  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:54.624259  320217 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:54.624710  320217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0401 20:25:54.624733  320217 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:25:54.779491  320217 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671514
	
	I0401 20:25:54.779524  320217 ubuntu.go:169] provisioning hostname "no-preload-671514"
	I0401 20:25:54.779615  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:54.813686  320217 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:54.814107  320217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0401 20:25:54.814133  320217 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-671514 && echo "no-preload-671514" | sudo tee /etc/hostname
	I0401 20:25:54.980710  320217 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671514
	
	I0401 20:25:54.980804  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:55.007789  320217 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:55.008387  320217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0401 20:25:55.008538  320217 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-671514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-671514/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-671514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:25:55.162434  320217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:25:55.162465  320217 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:25:55.162494  320217 ubuntu.go:177] setting up certificates
	I0401 20:25:55.162510  320217 provision.go:84] configureAuth start
	I0401 20:25:55.162563  320217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:25:55.185985  320217 provision.go:143] copyHostCerts
	I0401 20:25:55.186045  320217 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:25:55.186056  320217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:25:55.186134  320217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:25:55.186299  320217 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:25:55.186313  320217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:25:55.186354  320217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:25:55.186424  320217 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:25:55.186430  320217 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:25:55.186461  320217 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:25:55.186523  320217 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.no-preload-671514 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-671514]
	I0401 20:25:55.324625  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:25:55.324657  320217 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.416670209s
	I0401 20:25:55.324674  320217 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:25:55.381037  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:25:55.381071  320217 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 2.472983211s
	I0401 20:25:55.381088  320217 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:25:55.396511  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:25:55.396540  320217 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 2.488630122s
	I0401 20:25:55.396552  320217 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:25:55.401742  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:25:55.401794  320217 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 2.493752779s
	I0401 20:25:55.401807  320217 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:25:55.518321  320217 provision.go:177] copyRemoteCerts
	I0401 20:25:55.518385  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:25:55.518557  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:55.539362  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:25:55.638955  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:25:55.664906  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:25:55.694098  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:25:55.721728  320217 provision.go:87] duration metric: took 559.205745ms to configureAuth
	I0401 20:25:55.721789  320217 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:25:55.721983  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:25:55.722005  320217 cache.go:157] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:25:55.722029  320217 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 2.81405577s
	I0401 20:25:55.722044  320217 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:25:55.722064  320217 cache.go:87] Successfully saved all images to host disk.
	I0401 20:25:55.722121  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:55.741163  320217 main.go:141] libmachine: Using SSH client type: native
	I0401 20:25:55.741384  320217 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0401 20:25:55.741406  320217 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:25:55.982463  320217 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:25:55.982492  320217 machine.go:96] duration metric: took 1.391257334s to provisionDockerMachine
	I0401 20:25:55.982504  320217 client.go:171] duration metric: took 3.049015877s to LocalClient.Create
	I0401 20:25:55.982528  320217 start.go:167] duration metric: took 3.049073559s to libmachine.API.Create "no-preload-671514"
	I0401 20:25:55.982537  320217 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:25:55.982551  320217 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:25:55.982621  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:25:55.982665  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:56.003898  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:25:56.108735  320217 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:25:56.113062  320217 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:25:56.113105  320217 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:25:56.113118  320217 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:25:56.113130  320217 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:25:56.113144  320217 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:25:56.113208  320217 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:25:56.113312  320217 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:25:56.113440  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:25:56.123246  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:25:56.150713  320217 start.go:296] duration metric: took 168.161393ms for postStartSetup
	I0401 20:25:56.151034  320217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:25:56.170258  320217 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:25:56.170578  320217 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:25:56.170634  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:56.191761  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:25:56.290827  320217 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:25:56.295270  320217 start.go:128] duration metric: took 3.364095401s to createHost
	I0401 20:25:56.295322  320217 start.go:83] releasing machines lock for "no-preload-671514", held for 3.364237138s
	I0401 20:25:56.295388  320217 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:25:56.317400  320217 ssh_runner.go:195] Run: cat /version.json
	I0401 20:25:56.317446  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:56.317454  320217 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:25:56.317504  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:25:56.349565  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:25:56.349566  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:25:56.531769  320217 ssh_runner.go:195] Run: systemctl --version
	I0401 20:25:56.538056  320217 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:25:56.695765  320217 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:25:56.701526  320217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:25:56.727232  320217 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:25:56.727316  320217 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:25:56.767303  320217 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
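	The find/mv pair above sidelines any pre-baked bridge or podman CNI config by renaming it with a .mk_disabled suffix, so CRI-O cannot pick a conflicting network before kindnet is laid down. A rough Go equivalent of that rename pass (name filters mirrored from the log; an illustration, not minikube's own helper):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableCNIConfs renames bridge/podman CNI configs out of the way,
    // mirroring the `find ... -exec mv {} {}.mk_disabled` step in the log.
    func disableCNIConfs(dir string) error {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return err
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			p := filepath.Join(dir, name)
    			if err := os.Rename(p, p+".mk_disabled"); err != nil {
    				return err
    			}
    			fmt.Println("disabled", p)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := disableCNIConfs("/etc/cni/net.d"); err != nil {
    		fmt.Println(err)
    	}
    }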
	I0401 20:25:56.767323  320217 start.go:495] detecting cgroup driver to use...
	I0401 20:25:56.767353  320217 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:25:56.767389  320217 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:25:56.784889  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:25:56.802816  320217 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:25:56.802874  320217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:25:56.826271  320217 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:25:56.850793  320217 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:25:56.963885  320217 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:25:57.082040  320217 docker.go:233] disabling docker service ...
	I0401 20:25:57.082108  320217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:25:57.107389  320217 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:25:57.122413  320217 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:25:57.232628  320217 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:25:57.343594  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:25:57.359986  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:25:57.383817  320217 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:25:57.383892  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.397955  320217 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:25:57.398024  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.412480  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.425646  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.438604  320217 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:25:57.451765  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.466347  320217 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.489425  320217 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:25:57.503010  320217 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:25:57.514175  320217 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:25:57.522453  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:25:57.616577  320217 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:01.174079  320217 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.55746327s)
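	The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10 and cgroup_manager to cgroupfs before crio is restarted. The same key replacement, sketched as a pure-Go string transform (a hypothetical helper, shown only to make the sed pattern explicit):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setCrioOption reproduces the effect of the sed one-liners in the log:
    // replace any existing `key = ...` line in a crio.conf.d drop-in with
    // `key = "value"`.
    func setCrioOption(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
    	conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\npause_image = \"registry.k8s.io/pause:3.9\"\n"
    	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
    	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10")
    	fmt.Print(conf) // both keys rewritten, everything else untouched
    }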
	I0401 20:26:01.174128  320217 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:01.174176  320217 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:01.178129  320217 start.go:563] Will wait 60s for crictl version
	I0401 20:26:01.178182  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.182091  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:01.233144  320217 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:01.233225  320217 ssh_runner.go:195] Run: crio --version
	I0401 20:26:01.281316  320217 ssh_runner.go:195] Run: crio --version
	I0401 20:26:01.333300  320217 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:01.334861  320217 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:01.359307  320217 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:01.363616  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
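	That bash one-liner keeps /etc/hosts idempotent: strip any stale host.minikube.internal line, then append the current gateway address. Rendered as a standalone Go helper (illustrative; minikube runs the bash version over ssh):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry drops any line ending in "\t<host>" and appends
    // "<ip>\t<host>", matching the grep -v / echo pipeline in the log.
    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }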
	I0401 20:26:01.374548  320217 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:01.374649  320217 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:01.374689  320217 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:01.419153  320217 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
	I0401 20:26:01.419179  320217 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.2 registry.k8s.io/kube-controller-manager:v1.32.2 registry.k8s.io/kube-scheduler:v1.32.2 registry.k8s.io/kube-proxy:v1.32.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:26:01.419245  320217 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.419270  320217 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.419284  320217 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:01.419297  320217 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.419248  320217 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:01.419321  320217 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.419303  320217 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0401 20:26:01.419345  320217 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:01.420522  320217 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:01.420668  320217 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.420702  320217 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:01.420726  320217 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.420762  320217 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:01.420822  320217 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.420864  320217 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.420897  320217 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0401 20:26:01.593839  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.594556  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.598152  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.602130  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0401 20:26:01.663437  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.694097  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:01.726426  320217 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.2" does not exist at hash "b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389" in container runtime
	I0401 20:26:01.726479  320217 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.726519  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.726625  320217 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0401 20:26:01.726646  320217 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.726672  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.726740  320217 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.2" does not exist at hash "d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d" in container runtime
	I0401 20:26:01.726763  320217 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.726790  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.729415  320217 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0401 20:26:01.729463  320217 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0401 20:26:01.729498  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.777958  320217 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.2" does not exist at hash "85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef" in container runtime
	I0401 20:26:01.778003  320217 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.778043  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.780458  320217 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I0401 20:26:01.780491  320217 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:01.780527  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:01.780607  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.780643  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.780695  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.780744  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0401 20:26:01.782364  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:01.784529  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.955955  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0401 20:26:01.956019  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:01.956075  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:01.956131  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:01.956192  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:01.982128  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:01.982260  320217 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.2" needs transfer: "registry.k8s.io/kube-proxy:v1.32.2" does not exist at hash "f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5" in container runtime
	I0401 20:26:01.982296  320217 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:01.982330  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:02.173713  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0401 20:26:02.173839  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0401 20:26:02.173903  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:02.173971  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.2
	I0401 20:26:02.174032  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.2
	I0401 20:26:02.174079  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:02.174179  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.2
	I0401 20:26:02.378803  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2
	I0401 20:26:02.378865  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0401 20:26:02.378906  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0401 20:26:02.378954  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0401 20:26:02.378953  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0401 20:26:02.379004  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I0401 20:26:02.379012  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0401 20:26:02.379043  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2
	I0401 20:26:02.379059  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2
	I0401 20:26:02.379098  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0401 20:26:02.379104  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0401 20:26:02.379170  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:02.445095  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0401 20:26:02.445130  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0401 20:26:02.445194  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.32.2': No such file or directory
	I0401 20:26:02.445206  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 --> /var/lib/minikube/images/kube-apiserver_v1.32.2 (28680704 bytes)
	I0401 20:26:02.445240  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0401 20:26:02.445255  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0401 20:26:02.445329  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.32.2': No such file or directory
	I0401 20:26:02.445341  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 --> /var/lib/minikube/images/kube-controller-manager_v1.32.2 (26269696 bytes)
	I0401 20:26:02.445371  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.32.2': No such file or directory
	I0401 20:26:02.445379  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 --> /var/lib/minikube/images/kube-scheduler_v1.32.2 (20667904 bytes)
	I0401 20:26:02.445592  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I0401 20:26:02.445683  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I0401 20:26:02.454174  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.2
	I0401 20:26:02.486828  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.16-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.16-0': No such file or directory
	I0401 20:26:02.486869  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 --> /var/lib/minikube/images/etcd_3.5.16-0 (57690112 bytes)
	I0401 20:26:02.568057  320217 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10
	I0401 20:26:02.568551  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10
	I0401 20:26:02.630603  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2
	I0401 20:26:02.630717  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2
	I0401 20:26:02.659840  320217 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:02.877531  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.32.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.32.2': No such file or directory
	I0401 20:26:02.877572  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 --> /var/lib/minikube/images/kube-proxy_v1.32.2 (30910464 bytes)
	I0401 20:26:02.877924  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0401 20:26:02.877955  320217 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0401 20:26:02.878001  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0401 20:26:02.878439  320217 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0401 20:26:02.878487  320217 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:02.878534  320217 ssh_runner.go:195] Run: which crictl
	I0401 20:26:04.785997  320217 ssh_runner.go:235] Completed: which crictl: (1.907440906s)
	I0401 20:26:04.786055  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:04.786281  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.908260265s)
	I0401 20:26:04.786297  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0401 20:26:04.786316  320217 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0401 20:26:04.786347  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.2
	I0401 20:26:06.658261  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.2: (1.871890194s)
	I0401 20:26:06.658289  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 from cache
	I0401 20:26:06.658320  320217 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0401 20:26:06.658382  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.2
	I0401 20:26:06.658322  320217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.872249432s)
	I0401 20:26:06.658460  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:09.417084  320217 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.758606235s)
	I0401 20:26:09.417169  320217 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:09.417085  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.2: (2.75867224s)
	I0401 20:26:09.417244  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 from cache
	I0401 20:26:09.417283  320217 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0401 20:26:09.417330  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.2
	I0401 20:26:09.461764  320217 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0401 20:26:09.461870  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0401 20:26:11.168986  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.2: (1.751634287s)
	I0401 20:26:11.169015  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 from cache
	I0401 20:26:11.169039  320217 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.2
	I0401 20:26:11.169099  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.2
	I0401 20:26:11.169039  320217 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.707148259s)
	I0401 20:26:11.169173  320217 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0401 20:26:11.169203  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0401 20:26:13.035834  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.2: (1.86671037s)
	I0401 20:26:13.035864  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 from cache
	I0401 20:26:13.035889  320217 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I0401 20:26:13.035937  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I0401 20:26:17.745319  320217 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (4.709351353s)
	I0401 20:26:17.745351  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I0401 20:26:17.745370  320217 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0401 20:26:17.745415  320217 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
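	Each of the eight images follows the same sequence visible above: stat the tar under /var/lib/minikube/images on the node, scp it from the host cache only when the stat fails, then podman-load it into CRI-O's store. A compressed sketch of that per-image loop (the ssh target and port handling are placeholders; the real plumbing is minikube's ssh_runner, and the log stats size and mtime where this sketch only checks existence):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    // node is a placeholder; the real target is the kic container reached via
    // the ephemeral ssh port (33093 in this run), which plain "ssh" won't know.
    const node = "docker@127.0.0.1"

    // loadCachedImage: existence check on the node, copy from host cache if
    // missing, then load the tar into the container runtime's image store.
    func loadCachedImage(hostTar string) error {
    	remote := filepath.Join("/var/lib/minikube/images", filepath.Base(hostTar))
    	if err := exec.Command("ssh", node, "stat", remote).Run(); err != nil {
    		// stat exited non-zero: the tar is not on the node yet, copy it.
    		if out, err := exec.Command("scp", hostTar, node+":"+remote).CombinedOutput(); err != nil {
    			return fmt.Errorf("scp: %v: %s", err, out)
    		}
    	}
    	if out, err := exec.Command("ssh", node, "sudo", "podman", "load", "-i", remote).CombinedOutput(); err != nil {
    		return fmt.Errorf("podman load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	tar := "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10"
    	if err := loadCachedImage(tar); err != nil {
    		fmt.Println(err)
    	}
    }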
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
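Note: the generated kubeadm.yaml above is a four-document YAML stream: InitConfiguration and ClusterConfiguration (both kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. A small sketch, assuming gopkg.in/yaml.v3, that walks the documents and prints each apiVersion/kind, which is handy when hand-editing a config like this one (not part of minikube):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// yaml.Decoder iterates over "---"-separated documents in one stream.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s %s\n", doc["apiVersion"], doc["kind"])
    	}
    }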
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
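Note: each binary above follows the same stat-then-transfer pattern: `stat -c "%s %y" <path>` exits with status 1 when the file is absent, and only then is the cached binary scp'd to the node. A hypothetical local analogue in Go (helper name and paths illustrative):

    package main

    import (
    	"io"
    	"log"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies src to dst only when dst is missing,
    // mirroring the existence check in the log above.
    func ensureBinary(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already present, skip the transfer
    	} else if !os.IsNotExist(err) {
    		return err
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	if err := ensureBinary(
    		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.32.2/kubeadm"),
    		"/var/lib/minikube/binaries/v1.32.2/kubeadm",
    	); err != nil {
    		log.Fatal(err)
    	}
    }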
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
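Note: the /etc/hosts one-liner above is idempotent: it filters out any existing control-plane.minikube.internal entry, appends the current mapping, writes the result to a temp file, and sudo-copies it back so a partial write never truncates the live file. A hedged Go sketch of the same rewrite (the temp path is illustrative):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for the control-plane alias.
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.76.2\t"+host)
    	// Write to a temp file first; the log then copies it into place with
    	// sudo, so a failed write cannot leave /etc/hosts half-written.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }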
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
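Note: the openssl/ln pairs above build OpenSSL's subject-hash lookup layout: `openssl x509 -hash -noout -in <pem>` prints the certificate's subject hash, and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so TLS clients can find the CA by hash. A sketch of that step, shelling out to openssl (assumed to be on PATH; not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash symlinks /etc/ssl/certs/<hash>.0 to the given PEM,
    // matching the "test -L || ln -fs" commands in the log above.
    func linkBySubjectHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	if _, err := os.Lstat(link); err == nil {
    		return nil // symlink already in place
    	}
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }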
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
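Note: the bash pipeline at 20:26:37.522615 is what produces this "host record injected" message: sed inserts a `hosts { 192.168.76.1 host.minikube.internal; fallthrough }` block ahead of the Corefile's forward directive (plus a `log` line before `errors`), and the edited coredns ConfigMap is fed back through `kubectl replace -f -`, so pods can resolve the host machine by name.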
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
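Note: the node never left Ready=False for the whole window: node_ready.go re-polls the Node's Ready condition roughly every 2.5s, and the inner waitNodeCondition deadline expired after exactly 4m0s even though a 6m0s budget was advertised at 20:26:38. A generic Go sketch of that poll-until-deadline pattern (interval and timeout illustrative, shortened so the demo exits quickly; not minikube's exact values):

    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // waitFor re-checks cond on a fixed interval until it holds or the
    // context expires, like the node-readiness loop in the log above.
    func waitFor(ctx context.Context, interval time.Duration, cond func() bool) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if cond() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("waitNodeCondition: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	// The window in this log was 4m0s; 3s here keeps the demo short.
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()
    	err := waitFor(ctx, 500*time.Millisecond, func() bool {
    		return false // stands in for a node that never reports Ready=True
    	})
    	if errors.Is(err, context.DeadlineExceeded) {
    		fmt.Println(err) // mirrors "context deadline exceeded" in the log
    	}
    }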
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 

                                                
                                                
** /stderr **
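The node_ready.go:53 lines above are minikube polling the node's Ready condition every couple of seconds; the 4m node-wait budget expires at 20:30:38 and the start is aborted with GUEST_START (exit status 80). For post-mortem work the same check can be rerun directly against the cluster. The Go sketch below is a minimal illustration using client-go, not minikube's actual node_ready.go; the kubeconfig path, node name, and timings are assumptions:

	// Minimal sketch: poll a node's Ready condition until it is True or a
	// deadline passes, mirroring the node_ready.go lines in the log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // no Ready condition reported yet
			})
	}

	func main() {
		// Hypothetical kubeconfig path; the harness writes one under the test home.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitNodeReady(context.Background(), cs, "no-preload-671514", 4*time.Minute); err != nil {
			log.Fatalf("node never became Ready: %v", err)
		}
	}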
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320994,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:53.725412829Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551a0a4bf7c626f1683950daf2267c02a0c1a380ba131a8e8d82e662c41d9ec3",
	            "SandboxKey": "/var/run/docker/netns/551a0a4bf7c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a6:70:db:fd:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "1a7e5caa72d88eb8737c228beb2c5614aedde15b52d06379ca4b1c60e6b9f6aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
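The inspect output shows the Docker layer itself is healthy: the kic container is running, the 2200 MiB memory limit is applied (2306867200 bytes), and the API-server port 8443 is published on 127.0.0.1:33096, so the readiness failure is inside the guest rather than in the container plumbing. The same fields can also be read programmatically; this is a small sketch assuming the Docker Go SDK (github.com/docker/docker/client), whereas the harness itself shells out to docker inspect:

	// Sketch: read container state and the 8443/tcp host binding via the
	// Docker Engine API instead of parsing `docker inspect` JSON by hand.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "no-preload-671514")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("state:", info.State.Status) // "running" in the dump above
		for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			// 127.0.0.1:33096 in the dump above
			fmt.Printf("apiserver published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}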
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
E0401 20:30:38.802193   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
E0401 20:30:38.998450   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
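
The three downloads above follow dl.k8s.io's "checksum=file:" convention: each binary is fetched together with its published .sha256 digest and only cached and scp'd once the two agree, and the preceding "stat -c" existence checks decide whether a transfer is needed at all. A minimal Go sketch of that verify-before-install pattern (the URL matches the log, but the function names and error handling are illustrative, not minikube's actual download.go code):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet"

	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sums, err := fetch(url + ".sha256") // published digest, hex text
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sums))[0]
	h := sha256.Sum256(bin)
	if got := hex.EncodeToString(h[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	// persist only after the digest matches, mirroring the cache-then-scp flow
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
}
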
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
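
Each "generating signed profile cert" step above is plain X.509 issuance: a fresh key pair plus a certificate signed by the shared minikubeCA, with the logged IPs embedded as subject alternative names. A compact Go sketch of that shape; it is self-contained, so it creates a throwaway CA in-process instead of loading ca.crt/ca.key, and every name in it is illustrative rather than minikube's actual crypto.go code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA standing in for minikubeCA (normally loaded from disk)
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// leaf cert signed by the CA, with the IP SANs seen in the log:
	// [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
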
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
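
The run of "skipping subnet ... that is taken" lines is minikube probing candidate private /24s in a fixed order (192.168.49.0, .58, .67, ... in steps of 9) until one is free, then creating the docker network there and reserving .2 as the node's static IP. A rough Go sketch of that scan, treating a subnet as taken when its gateway address is already bound to a host interface (illustrative only; the real logic lives in minikube's network package):

package main

import (
	"fmt"
	"net"
)

// taken reports whether the gateway IP is already assigned to a local
// interface, e.g. an existing docker bridge like br-64a5a6ce16e8.
func taken(gateway string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.String() == gateway {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third <= 247; third += 9 {
		gw := fmt.Sprintf("192.168.%d.1", third)
		if taken(gw) {
			fmt.Println("skipping subnet, gateway in use:", gw)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", third, gw)
		return
	}
	fmt.Println("no free private subnet found")
}
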
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
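
The sed commands above patch minikube's cri-o drop-in in place rather than templating a fresh file. After they run, /etc/crio/crio.conf.d/02-crio.conf ends up roughly like the snippet below; the section placement follows stock cri-o layout and is shown for orientation, not captured from this run:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
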
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
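
The two "Will wait 60s" steps after the crio restart are simple readiness polls: first for the CRI socket to reappear, then for crictl to answer a version query. A minimal Go sketch of the socket poll (the timeout and interval are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		// stat succeeds once crio has recreated its socket
		if _, err := os.Stat(sock); err == nil {
			fmt.Println("socket ready:", sock)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
	os.Exit(1)
}
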
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
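The certs.go/crypto.go lines above follow one pattern throughout: reuse the shared CA, then sign per-profile certs (client, apiserver with IP SANs, aggregator proxy-client). Below is a minimal, self-contained Go sketch of that pattern, not minikube's actual code; the file paths, CommonName, RSA-key assumption for the CA, and three-year lifetime are illustrative placeholders.

// signcert.go - hedged sketch: issue a CA-signed serving cert with IP SANs,
// analogous to the apiserver cert generated above. Error handling is minimal.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM decodes the first PEM block from a file or panics; fine for a sketch.
func mustPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	// Reuse the existing CA, as the "skipping valid ... ca cert" lines above do.
	caCert, err := x509.ParseCertificate(mustPEM("ca.crt"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key")) // assumes an RSA CA key
	if err != nil {
		panic(err)
	}
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		// The IP SANs reported for the apiserver cert in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0), // placeholder lifetime
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	pemKey := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)})
	if err := os.WriteFile("apiserver.crt", pemCert, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.key", pemKey, 0o600); err != nil {
		panic(err)
	}
}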
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
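The --discovery-token-ca-cert-hash in the join commands above is kubeadm's "pubkeypin" format: a SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A small Go sketch to recompute it from ca.crt (the path is an assumption based on the certificateDir logged earlier):

// hashca.go - recompute the discovery-token CA cert hash from a CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // placeholder path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare against the value in the join command
}

The same digest can typically be derived with openssl by extracting the public key in DER form and hashing it, per the kubeadm documentation.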
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
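The repeated openssl x509 -hash / ln -fs pairs above implement OpenSSL's CA lookup convention: each trusted certificate must be reachable in /etc/ssl/certs under a <subject-hash>.0 symlink, where the hash is what `openssl x509 -hash` prints. A rough Go equivalent of one iteration (paths taken from the log; assumes the openssl binary is on PATH):

// linkca.go - sketch of the hash-and-symlink step for one certificate.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Ask openssl for the subject-name hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
}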
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
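The SSH client above connects to 127.0.0.1:33103 because the docker run command earlier publishes the node container's ports on the loopback interface; the mapped host port for 22/tcp is read back with the inspect template that appears repeatedly in this log. A small Go sketch of that lookup (the container name is taken from the log):

// sshport.go - recover the host port mapped to the container's SSH port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same Go template used by the cli_runner lines in this log.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-993330").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
}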
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
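The shell fragment above is the standard "make the hostname resolve locally" edit: if the name is absent from /etc/hosts, rewrite an existing 127.0.1.1 line or append one. A minimal Go sketch of the same idempotent edit (profile name taken from the log; must run as root; no atomic write or locking, unlike the tmp-file-and-cp pattern used elsewhere in this log):

// hosts.go - idempotently map the machine hostname to 127.0.1.1.
package main

import (
	"os"
	"strings"
)

func main() {
	const name = "default-k8s-diff-port-993330"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), name) {
		return // hostname already resolves locally; nothing to do
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // same effect as the sed branch above
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // the tee -a branch
	}
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(lines, "\n")), 0o644); err != nil {
		panic(err)
	}
}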
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
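The two "Will wait 60s" steps above amount to polling: wait for the CRI-O socket file to appear, then probe crictl until it answers. A rough sketch of the first wait (function name and poll interval are invented for illustration; the path and timeout mirror the log):

// waitsock.go - poll for the CRI socket until it exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file present; crictl version can be probed next
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
}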
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
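	(Note: the 2305-byte file just copied is the four-document kubeadm config rendered above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration. It can be sanity-checked without touching the cluster using the same pinned kubeadm binary this run invokes later; a sketch, run inside the node, which may need the same --ignore-preflight-errors list shown further down in this log:
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)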
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
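	(Note: the one-liner above makes the control-plane hosts entry idempotent: strip any stale line for the name, append the current mapping, then copy the result back with root privileges. The same commands, broken out and annotated:
	# Drop any existing control-plane.minikube.internal entry
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	# Append the current mapping for this profile's node IP
	echo "192.168.103.2	control-plane.minikube.internal" >> /tmp/h.$$
	# Replace /etc/hosts in one sudo copy
	sudo cp /tmp/h.$$ /etc/hosts
	)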
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
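	(Note: the apiserver profile cert generated above is signed for the service VIP, loopback, and the node IP [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]. To confirm the SANs on the resulting cert — a sketch, using the path from this log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt \
	  | grep -A1 "Subject Alternative Name"
	)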
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
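	(Note: the .0 symlink names above — b5213941.0, 51391683.0, 3ec20f2e.0 — are the OpenSSL subject hashes of the respective certificates, which is how the c_rehash-style lookup in /etc/ssl/certs resolves a CA. Reproducing one by hand:
	# Prints the subject hash, e.g. "b5213941"; the symlink is <hash>.0
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	)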
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
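	(Note: the sha256 discovery hash in the join commands above can be recomputed from the cluster CA at any time using the standard kubeadm recipe; a sketch, run on the control-plane node, with the CA path taken from the certificatesDir in this log's kubeadm config:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	)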
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
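	(Note: once the kindnet manifest staged above is applied — the kubectl apply appears further down — the CNI pods can be checked per profile; a sketch, where the app=kindnet label selector is an assumption about minikube's kindnet manifest:
	kubectl --context no-preload-671514 -n kube-system get pods -l app=kindnet -o wide
	)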
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
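	(Note: the host.minikube.internal record injected above is spliced into the coredns ConfigMap as a hosts-plugin stanza by the sed pipeline shown earlier. It can be verified with — a sketch, using this profile's kubectl context:
	kubectl --context no-preload-671514 -n kube-system get configmap coredns -o yaml \
	  | grep -B1 -A3 "host.minikube.internal"
	)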
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
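
Both kubeadm join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash sha256:3d93fb…. That value is not a secret: it is the SHA-256 digest of the DER-encoded Subject Public Key Info of the CA certificate, which lets a joining node verify the control plane it discovers through the bootstrap token. A minimal Go sketch of the computation (not minikube's own code; /etc/kubernetes/pki/ca.crt is the kubeadm default location and assumed here):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read the cluster CA certificate (kubeadm default path, assumed).
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("ca.crt contains no PEM block")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash kubeadm prints is the SHA-256 of the DER-encoded
        // SubjectPublicKeyInfo of the CA certificate.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }

Running this on the control-plane node should reproduce the sha256:… string shown in the join commands above.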
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
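
The interleaved runs of `kubectl get sa default` above (one attempt roughly every 500 ms per profile) are each cluster waiting for the kube-controller-manager to create the "default" ServiceAccount before the kube-system privileges are considered elevated; the duration metric lines close that loop. A rough client-go equivalent of the wait, with the kubeconfig path taken from the log and the 2-minute timeout an arbitrary assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll every 500ms until the "default" ServiceAccount in the "default"
        // namespace exists (the un-namespaced `get sa default` above targets the
        // default namespace). Swallowing the Get error keeps the poll going while
        // the controller-manager has not created the account yet.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return err == nil, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("default ServiceAccount exists; RBAC bootstrap can proceed")
    }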
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
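
The long sed pipeline at 20:26:42.551219 edits the coredns ConfigMap in place: it inserts a hosts stanza mapping host.minikube.internal to the Docker network gateway and a log directive ahead of errors, then replaces the ConfigMap, which is what the "host record injected" line above confirms. Reconstructed from the sed expressions, the relevant part of the resulting Corefile looks roughly like this (the surrounding plugins are stock CoreDNS defaults, shown only for context):

    .:53 {
        log
        errors
        health
        # ... remaining default plugins (ready, kubernetes, prometheus, ...) ...
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }

The fallthrough directive matters: queries that are not host.minikube.internal fall through to the later plugins instead of getting NXDOMAIN from the hosts block.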
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
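
The kapi.go:214 lines record minikube trimming each coredns Deployment from the stock two replicas to one, which is enough for a single-node cluster. A hedged client-go sketch of that rescale via the Scale subresource (kubeconfig path from the log; everything else assumed, and not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        deployments := cs.AppsV1().Deployments("kube-system")
        // Fetch the Scale subresource, set the desired replica count, write it back.
        scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println(`"coredns" deployment rescaled to 1 replica`)
    }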
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
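
From here the log is dominated by the 6m0s readiness wait announced by the node_ready.go:35 lines: each node_ready.go:53 entry is one poll observing that a node's Ready condition is still False, typically because the chosen CNI (kindnet here) has not come up yet, so the kubelet cannot flip the condition. A condensed client-go sketch of such a poll, with the node name and timeout taken from the log and the rest assumed:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node's NodeReady condition is True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Mirror the log: wait up to 6m0s for the node to become Ready.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                ready, err := nodeReady(ctx, cs, "no-preload-671514")
                if err != nil {
                    return false, nil // treat transient API errors as "keep polling"
                }
                return ready, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }

In the failing runs below, the condition never turns True within the window, which is consistent with the FirstStart/DeployApp timeouts summarized at the top of the report.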
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
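
	The interleaved entries above are the four parallel StartStop clusters (PID 318306 is old-k8s-version-964633, 320217 is no-preload-671514, 330894 is embed-certs-974821, 333931 is default-k8s-diff-port-993330), each re-reading its node's Ready condition every couple of seconds until the wait deadline expires; no-preload-671514 is simply the first to exhaust its wait (4m0s for the Ready condition, per the duration metric above), producing the GUEST_START exit. For reference, a minimal client-go sketch of this kind of readiness poll follows; it is illustrative only, and the function and variable names are assumptions, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-reads the Node object until its Ready condition is True
// or the deadline passes, the same shape as the loop logged above.
// Illustrative sketch only; minikube's implementation differs in detail.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second) // the log shows checks roughly every 2-2.5s
	}
	return fmt.Errorf("waitNodeCondition: context deadline exceeded for node %q", name)
}

func main() {
	// Assumes a reachable kubeconfig at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "no-preload-671514", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

	In this run no amount of polling could succeed, because the pod that would flip the Ready condition is stuck on an image pull; see the CRI-O and kubelet sections below.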
	
	
	==> CRI-O <==
	Apr 01 20:26:37 no-preload-671514 crio[1038]: time="2025-04-01 20:26:37.819961913Z" level=info msg="Started container" PID=2864 containerID=85c1e320d180bbd0088975d6a178f8be6cd9d4bc212333659d16d82afc49e614 description=kube-system/kube-proxy-pfvch/kube-proxy id=5be2a6f1-4775-4a99-9f0a-94c7a7a79e31 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ef8085608dab399a4404ddc1c7bdafa6e31a1b84736a959c33ae5867dc8b716
	Apr 01 20:27:10 no-preload-671514 crio[1038]: time="2025-04-01 20:27:10.314230062Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=88d9d7be-7d6f-4290-aac8-67811a5e2842 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:10 no-preload-671514 crio[1038]: time="2025-04-01 20:27:10.314498634Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=88d9d7be-7d6f-4290-aac8-67811a5e2842 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:21 no-preload-671514 crio[1038]: time="2025-04-01 20:27:21.242269780Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f5abf4b2-d01a-4239-9c32-2c606e0e7970 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:21 no-preload-671514 crio[1038]: time="2025-04-01 20:27:21.242517031Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=f5abf4b2-d01a-4239-9c32-2c606e0e7970 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:21 no-preload-671514 crio[1038]: time="2025-04-01 20:27:21.243079169Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2e496d39-e49b-45d2-8915-1cad284f36f5 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:27:21 no-preload-671514 crio[1038]: time="2025-04-01 20:27:21.244261555Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:28:08 no-preload-671514 crio[1038]: time="2025-04-01 20:28:08.242465717Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=be90e7ff-8965-4103-b412-10a6e3fdebf3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:08 no-preload-671514 crio[1038]: time="2025-04-01 20:28:08.242782844Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=be90e7ff-8965-4103-b412-10a6e3fdebf3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:21 no-preload-671514 crio[1038]: time="2025-04-01 20:28:21.241777782Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8c9223e8-2a26-4d15-bf89-9118c948583c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:21 no-preload-671514 crio[1038]: time="2025-04-01 20:28:21.242070458Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8c9223e8-2a26-4d15-bf89-9118c948583c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:21 no-preload-671514 crio[1038]: time="2025-04-01 20:28:21.242628736Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=29da4837-1883-4943-8196-39b553fbb805 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:28:21 no-preload-671514 crio[1038]: time="2025-04-01 20:28:21.243902121Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:06 no-preload-671514 crio[1038]: time="2025-04-01 20:29:06.242702784Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=75bdcb78-b0cc-4589-8b46-742e9c93548a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:06 no-preload-671514 crio[1038]: time="2025-04-01 20:29:06.242990884Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=75bdcb78-b0cc-4589-8b46-742e9c93548a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:18 no-preload-671514 crio[1038]: time="2025-04-01 20:29:18.241962453Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cf73edbc-b76b-4110-b398-c1a23b3d6335 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:18 no-preload-671514 crio[1038]: time="2025-04-01 20:29:18.242257466Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cf73edbc-b76b-4110-b398-c1a23b3d6335 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:32 no-preload-671514 crio[1038]: time="2025-04-01 20:29:32.242635128Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c21c8616-8dd2-44db-8fa3-930a941487e2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:32 no-preload-671514 crio[1038]: time="2025-04-01 20:29:32.242850261Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c21c8616-8dd2-44db-8fa3-930a941487e2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:47 no-preload-671514 crio[1038]: time="2025-04-01 20:29:47.242472050Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f7dfe719-a81c-4e07-9264-21b11183acd6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:47 no-preload-671514 crio[1038]: time="2025-04-01 20:29:47.242709463Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=f7dfe719-a81c-4e07-9264-21b11183acd6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:47 no-preload-671514 crio[1038]: time="2025-04-01 20:29:47.243238059Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4a212c94-3189-4362-b7b0-2983f0df941e name=/runtime.v1.ImageService/PullImage
	Apr 01 20:29:47 no-preload-671514 crio[1038]: time="2025-04-01 20:29:47.244420193Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:30:30 no-preload-671514 crio[1038]: time="2025-04-01 20:30:30.242323085Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a9458d2c-b2b1-4164-b8c2-d3426c66f103 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:30:30 no-preload-671514 crio[1038]: time="2025-04-01 20:30:30.242629523Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a9458d2c-b2b1-4164-b8c2-d3426c66f103 name=/runtime.v1.ImageService/ImageStatus
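
	This CRI-O section is the root cause in miniature: docker.io/kindest/kindnetd:v20250214-acbabc1a is checked, reported "not found", and re-pulled over and over for the whole run, and the kubelet section further down shows why the pulls never land: Docker Hub's "toomanyrequests" unauthenticated pull rate limit. Without kindnet there is never a CNI config in /etc/cni/net.d/, so the node can never report Ready. Pre-loading the image from the host (for example `minikube image load docker.io/kindest/kindnetd:v20250214-acbabc1a -p no-preload-671514`) or authenticating the pulls would take the registry out of the loop; that is a suggested mitigation, not something this job attempted.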
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85c1e320d180b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   4 minutes ago       Running             kube-proxy                0                   8ef8085608dab       kube-proxy-pfvch
	b0aca46f57421       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   0                   d6eb0bc2d9faa       kube-controller-manager-no-preload-671514
	b1305e045e585       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            0                   7f48b88c185a1       kube-apiserver-no-preload-671514
	b23ca2b60aaee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            0                   2269c2f962a90       kube-scheduler-no-preload-671514
	a09569ee98d25       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      0                   313adeb65123a       etcd-no-preload-671514
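
	What matters in this table is the absence: the five control-plane containers are Running, but no kindnet container ever appears, which matches the failed kindnetd pull above; the kindnet-5tgtq pod listed in the node description below never gets a started container.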
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:30:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:29:56 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:29:56 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:29:56 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:29:56 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3cd2d371a346a59dfa1024d7cfa972
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m7s
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 4m1s  kube-proxy       
	  Normal   Starting                 4m7s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m7s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m7s  kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m7s  kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m7s  kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m3s  node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
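
	The node description is consistent with that one missing image: capacity is ample, the control-plane pods are healthy, and the only failing condition is Ready=False with reason KubeletNotReady and the explicit message that no CNI configuration file exists in /etc/cni/net.d/. kindnet-5tgtq is scheduled (it shows up under Non-terminated Pods) but its container never starts, so the node.kubernetes.io/not-ready:NoSchedule taint persists for the entire wait.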
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
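
	The martian source messages are routine for this nested Docker bridge setup, pod traffic on 10.244.0.0/24 crossing eth0, and recur throughout these CI logs; nothing in dmesg points at the failure.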
	
	
	==> etcd [a09569ee98d25b8797a01583cf6bb9cf3fe3b924561e718c16c33790406ba75f] <==
	{"level":"info","ts":"2025-04-01T20:26:27.059624Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:26:27.060170Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:27.059810Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:27.060933Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:27.060798Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:27.147043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.148230Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.148768Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:27.148843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.149010Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149153Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.150349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.151183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:26:27.151297Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.152062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 20:30:39 up  1:13,  0 users,  load average: 0.36, 2.77, 2.49
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [b1305e045e585214e298aab4fd349ff7d954cc6f0d1e21c68ba6f8661dca4d35] <==
	I0401 20:26:29.756705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:29.756712       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:29.819873       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0401 20:26:29.822664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:29.822700       1 policy_source.go:240] refreshing policies
	I0401 20:26:29.845121       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:29.846052       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:29.846334       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:26:29.846348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:26:29.918153       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:30.638898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:30.642611       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:30.642630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:31.117588       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:31.154903       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:31.247406       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:31.253764       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0401 20:26:31.255167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:31.259965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:31.747957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:32.159479       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:32.172748       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:32.181425       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:37.047528       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:26:37.096719       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b0aca46f57421e96e35baa84bcdcd9a6bad97eecb63ba229e036b31284013db3] <==
	I0401 20:26:36.197623       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:36.200100       1 shared_informer.go:320] Caches are synced for node
	I0401 20:26:36.200176       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:26:36.200249       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:26:36.200261       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:26:36.200269       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:26:36.206895       1 shared_informer.go:320] Caches are synced for namespace
	I0401 20:26:36.208451       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-671514" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:36.208482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.208520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.209406       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:36.261522       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0401 20:26:36.292706       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:36.292756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:26:36.292766       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:26:36.367026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:37.266267       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:37.450979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="351.026065ms"
	I0401 20:26:37.543105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.050087ms"
	I0401 20:26:37.543243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.483µs"
	I0401 20:26:38.246138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="26.287677ms"
	I0401 20:26:38.269291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.910701ms"
	I0401 20:26:38.271288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.904763ms"
	I0401 20:26:38.271582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="184.754µs"
	I0401 20:29:56.854082       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	
	
	==> kube-proxy [85c1e320d180bbd0088975d6a178f8be6cd9d4bc212333659d16d82afc49e614] <==
	I0401 20:26:37.949549       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:38.161117       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:26:38.161200       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:38.192676       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:38.192754       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:38.226172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:38.226996       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:38.227319       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:38.229729       1 config.go:199] "Starting service config controller"
	I0401 20:26:38.229801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:38.229841       1 config.go:329] "Starting node config controller"
	I0401 20:26:38.237960       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:38.230235       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:38.238081       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:38.333244       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:38.343398       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:38.346335       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b23ca2b60aaee9f0d3c9d088f7ba444675fd1621dfc819621355bfa1d77ccdfb] <==
	W0401 20:26:29.834918       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:29.834950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:29.835049       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835293       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:29.835324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835415       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835478       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835574       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:29.835598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.838254       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:29.838318       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.680771       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:30.680814       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.817477       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:30.817608       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.834173       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:30.834218       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.911974       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:30.912043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.940767       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:30.940821       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0401 20:26:32.556366       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
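
	The scheduler's "forbidden" errors are the usual start-up race, its reflectors begin listing resources before the system:kube-scheduler RBAC bindings have propagated, and the final line shows the informer caches synced by 20:26:32; they are noise here, not a cause.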
	
	
	==> kubelet <==
	Apr 01 20:29:52 no-preload-671514 kubelet[2620]: E0401 20:29:52.188198    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539392187984855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:52 no-preload-671514 kubelet[2620]: E0401 20:29:52.188243    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539392187984855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:52 no-preload-671514 kubelet[2620]: E0401 20:29:52.203531    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:29:57 no-preload-671514 kubelet[2620]: E0401 20:29:57.204760    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:02 no-preload-671514 kubelet[2620]: E0401 20:30:02.189139    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539402188929268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:02 no-preload-671514 kubelet[2620]: E0401 20:30:02.189182    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539402188929268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:02 no-preload-671514 kubelet[2620]: E0401 20:30:02.206080    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:07 no-preload-671514 kubelet[2620]: E0401 20:30:07.207796    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:12 no-preload-671514 kubelet[2620]: E0401 20:30:12.190314    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539412190107552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:12 no-preload-671514 kubelet[2620]: E0401 20:30:12.190347    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539412190107552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:12 no-preload-671514 kubelet[2620]: E0401 20:30:12.209323    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:17 no-preload-671514 kubelet[2620]: E0401 20:30:17.210865    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:18 no-preload-671514 kubelet[2620]: E0401 20:30:18.996826    2620 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:18 no-preload-671514 kubelet[2620]: E0401 20:30:18.996909    2620 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:18 no-preload-671514 kubelet[2620]: E0401 20:30:18.997069    2620 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-5tgtq_kube-system(60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5): ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:30:18 no-preload-671514 kubelet[2620]: E0401 20:30:18.998260    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:30:22 no-preload-671514 kubelet[2620]: E0401 20:30:22.191491    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539422191307417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:22 no-preload-671514 kubelet[2620]: E0401 20:30:22.191536    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539422191307417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:22 no-preload-671514 kubelet[2620]: E0401 20:30:22.211844    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:27 no-preload-671514 kubelet[2620]: E0401 20:30:27.213003    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:30 no-preload-671514 kubelet[2620]: E0401 20:30:30.242898    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:30:32 no-preload-671514 kubelet[2620]: E0401 20:30:32.193553    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539432193334944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:32 no-preload-671514 kubelet[2620]: E0401 20:30:32.193596    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539432193334944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:32 no-preload-671514 kubelet[2620]: E0401 20:30:32.214578    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:37 no-preload-671514 kubelet[2620]: E0401 20:30:37.215409    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
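The kubelet log above isolates the root cause: the kindnet CNI image (docker.io/kindest/kindnetd:v20250214-acbabc1a) can never be pulled because the runner has exhausted Docker Hub's unauthenticated pull rate limit, so no CNI configuration is ever written to /etc/cni/net.d/ and the node never reports Ready. One possible mitigation, shown only as a sketch (these commands were not part of the test run, and they assume Docker Hub credentials are available on the runner):

	# Pull under an authenticated session (higher rate limit), then side-load
	# the image into the profile so kubelet never has to contact Docker Hub.
	docker login
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube image load docker.io/kindest/kindnetd:v20250214-acbabc1a -p no-preload-671514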
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
E0401 20:30:40.083990   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E0401 20:30:40.280291   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1 (62.272175ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-671514 describe pod coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1
(The NotFound errors are an artifact of the post-mortem helper rather than a second failure: the listed pods live in the kube-system namespace, but the describe command passes no -n flag and therefore queries the default namespace.)
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (287.65s)

TestStartStop/group/embed-certs/serial/FirstStart (274.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m32.703964835s)

-- stdout --
	* [embed-certs-974821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "embed-certs-974821" primary control-plane node in "embed-certs-974821" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
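Note that stdout ends after addon enablement without any error text; with --wait=true, a start that prints a clean summary but exits non-zero typically means component verification failed or timed out afterwards, and the specifics have to be recovered from the stderr trace below. When triaging such a run by hand, a typical first step is to capture the full cluster logs from the failed profile (a sketch; this command was not part of the test run):

	# Write the profile's complete logs to a file for offline inspection.
	minikube logs -p embed-certs-974821 --file=embed-certs-974821.log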
** stderr ** 
	I0401 20:26:10.700450  330894 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:10.700743  330894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:10.700772  330894 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:10.700784  330894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:10.700993  330894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:10.701742  330894 out.go:352] Setting JSON to false
	I0401 20:26:10.703224  330894 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4117,"bootTime":1743535054,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:10.703283  330894 start.go:139] virtualization: kvm guest
	I0401 20:26:10.705485  330894 out.go:177] * [embed-certs-974821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:10.707434  330894 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:10.707435  330894 notify.go:220] Checking for updates...
	I0401 20:26:10.710251  330894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:10.711589  330894 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:10.712916  330894 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:10.714362  330894 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:10.715774  330894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:10.719624  330894 config.go:182] Loaded profile config "flannel-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:10.719743  330894 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:10.719840  330894 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:10.719956  330894 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:10.753475  330894 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:10.753592  330894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:10.822850  330894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:true NGoroutines:98 SystemTime:2025-04-01 20:26:10.811667658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:10.823012  330894 docker.go:318] overlay module found
	I0401 20:26:10.824821  330894 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:10.826200  330894 start.go:297] selected driver: docker
	I0401 20:26:10.826221  330894 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:10.826237  330894 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:10.827403  330894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:10.894509  330894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:true NGoroutines:98 SystemTime:2025-04-01 20:26:10.877064608 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:10.894736  330894 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:10.895020  330894 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:10.897125  330894 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:10.898499  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:10.898572  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:10.898586  330894 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:10.898692  330894 start.go:340] cluster config:
	{Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:10.900325  330894 out.go:177] * Starting "embed-certs-974821" primary control-plane node in "embed-certs-974821" cluster
	I0401 20:26:10.901840  330894 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:10.903317  330894 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:10.904622  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:10.904681  330894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:10.904695  330894 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:10.904678  330894 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:10.904799  330894 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:10.904813  330894 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:10.904915  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:10.904937  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json: {Name:mk57fe49988b5f9c93e535ef5a7cb41d7b31b1dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:10.928512  330894 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:10.928541  330894 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:10.928556  330894 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:10.928591  330894 start.go:360] acquireMachinesLock for embed-certs-974821: {Name:mk504873d11b3a69d78cbbe682dafb679598342b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:10.928724  330894 start.go:364] duration metric: took 108.006µs to acquireMachinesLock for "embed-certs-974821"
	I0401 20:26:10.928758  330894 start.go:93] Provisioning new machine with config: &{Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:10.928865  330894 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:10.930736  330894 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:10.931014  330894 start.go:159] libmachine.API.Create for "embed-certs-974821" (driver="docker")
	I0401 20:26:10.931051  330894 client.go:168] LocalClient.Create starting
	I0401 20:26:10.931133  330894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:10.931171  330894 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:10.931199  330894 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:10.931289  330894 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:10.931323  330894 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:10.931340  330894 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:10.931779  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:10.954686  330894 cli_runner.go:211] docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:10.954773  330894 network_create.go:284] running [docker network inspect embed-certs-974821] to gather additional debugging logs...
	I0401 20:26:10.954803  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821
	W0401 20:26:10.976994  330894 cli_runner.go:211] docker network inspect embed-certs-974821 returned with exit code 1
	I0401 20:26:10.977032  330894 network_create.go:287] error running [docker network inspect embed-certs-974821]: docker network inspect embed-certs-974821: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-974821 not found
	I0401 20:26:10.977045  330894 network_create.go:289] output of [docker network inspect embed-certs-974821]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-974821 not found
	
	** /stderr **
	I0401 20:26:10.977159  330894 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:11.004230  330894 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:11.005280  330894 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:11.006478  330894 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:11.007982  330894 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:11.008848  330894 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:11.010463  330894 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c4a330}
	I0401 20:26:11.010498  330894 network_create.go:124] attempt to create docker network embed-certs-974821 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0401 20:26:11.010554  330894 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-974821 embed-certs-974821
	I0401 20:26:11.071372  330894 network_create.go:108] docker network embed-certs-974821 192.168.94.0/24 created
	I0401 20:26:11.071543  330894 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-974821" container
	I0401 20:26:11.071636  330894 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:11.093280  330894 cli_runner.go:164] Run: docker volume create embed-certs-974821 --label name.minikube.sigs.k8s.io=embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:11.113809  330894 oci.go:103] Successfully created a docker volume embed-certs-974821
	I0401 20:26:11.113877  330894 cli_runner.go:164] Run: docker run --rm --name embed-certs-974821-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --entrypoint /usr/bin/test -v embed-certs-974821:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:11.823991  330894 oci.go:107] Successfully prepared a docker volume embed-certs-974821
	I0401 20:26:11.824032  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:11.824055  330894 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:11.824130  330894 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
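The sed edits at 20:26:21.378 through 20:26:21.451 above amount to a small cri-o drop-in. A minimal sketch of the net effect, reconstructed from those commands rather than captured from the host (section headers are assumptions based on stock cri-o layout):

    # reconstructed effect of the sed edits above (sketch, not the actual file)
    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF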
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
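The version probe above goes through the endpoint written to /etc/crictl.yaml at 20:26:21.362. An equivalent manual probe, should the socket need checking by hand:

    # equivalent manual probe of the CRI socket configured above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version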
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
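The unit override above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 368-byte scp further down). Once it is in place, the merged unit can be inspected with a standard systemd command; a sketch, assuming shell access to the node container:

    # show the kubelet unit together with its 10-kubeadm.conf drop-in
    sudo systemctl cat kubelet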
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
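The 2292-byte kubeadm.yaml.new above is the rendered form of the config documents printed at kubeadm.go:195. If it needed checking by hand before init, kubeadm can validate it itself (kubeadm config validate, available since v1.26); a sketch using the same binary path the test uses:

    # optional sanity check of the rendered config (sketch)
    sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new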
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
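The grep-and-rewrite above is minikube's idempotent way of pinning a hosts entry: drop any line ending in the tab-anchored name, append the fresh record, then copy the temp file over /etc/hosts. Generalized as a hypothetical helper (not minikube code):

    # hypothetical helper mirroring the pattern above
    add_host() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    add_host 192.168.94.2 control-plane.minikube.internal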
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
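The three profile certs generated above serve distinct clients: client.crt authenticates kubectl as "minikube-user", apiserver.crt carries the SANs listed at 20:26:25.561 (10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2), and proxy-client.crt fronts the API aggregator. The apiserver SANs can be confirmed once the cert exists; a sketch:

    # inspect the apiserver cert's subject alternative names (sketch)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt \
      | grep -A1 'Subject Alternative Name'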
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
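The openssl x509 -hash calls above compute the subject hash under which OpenSSL looks up CA certificates in /etc/ssl/certs, and the ln -fs calls create the <hash>.0 symlinks that c_rehash/update-ca-certificates would normally maintain. The two steps combined:

    # derive the hash-named symlink for one CA cert (same recipe as above)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

which is why b5213941.0 above ends up pointing at minikubeCA.pem.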
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
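The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert with the standard recipe from the kubeadm docs; a sketch, assuming the cert path minikube uses (/var/lib/minikube/certs/ca.crt rather than the stock /etc/kubernetes/pki/ca.crt) and an RSA CA key:

    # recompute the discovery-token-ca-cert-hash (standard kubeadm recipe)
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'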
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
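The pipeline at 20:26:42.551 splices a hosts block (plus a log directive) into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the gateway. Reconstructed from the sed expressions above, the injected fragment is roughly:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }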
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
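The node_ready poll that follows is minikube repeatedly reading the node's Ready condition through the API server. The equivalent manual check, for reference:

    # what node_ready.go is polling, expressed as a kubectl one-liner
    kubectl --context embed-certs-974821 get node embed-certs-974821 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'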
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 

** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
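The wait that expired above is minikube's node_ready check: it polls the node object until the Ready condition turns True and gives up once the --wait budget is spent. A rough equivalent of that check can be run by hand against the same profile (a sketch, not from the log; profile and node name are taken from the failure above, and minikube names the kubeconfig context after the profile):

	# Re-run the failing start exactly as the test did:
	out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 \
	  --alsologtostderr --wait=true --embed-certs --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.32.2

	# Roughly the same readiness gate the harness timed out on:
	kubectl --context embed-certs-974821 wait --for=condition=Ready \
	  node/embed-certs-974821 --timeout=6m0s

	# A node stuck at Ready=False usually points at the CNI or the runtime;
	# the node's conditions and events narrow it down:
	kubectl --context embed-certs-974821 describe node embed-certs-974821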
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:16.922485679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89edf444d031870b678606c3dab14cec64f5db6770fe8f67ec9b313ab700bd50",
	            "SandboxKey": "/var/run/docker/netns/89edf444d031",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:e2:72:9d:20:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "8c07b01949d42e8f17c50ba6d828c0850ad6e031d8825f2ba64c77c1d4a405fd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
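The full inspect dump above is verbose; when triaging a report like this, the fields that usually matter are the container state and the published host ports. Both can be pulled out directly with docker's Go-template formatting (illustrative invocations, not from the log; the container name comes from the dump above):

	# Is the kic container alive, and under which PID?
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' embed-certs-974821

	# Which host ports back the SSH (22) and API server (8443) endpoints?
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-974821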
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
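The --format flag in the command above is a Go template over minikube's status struct; asking for {{.Host}} alone prints just the machine state (for example "Running"), which is all the post-mortem helper needs before attempting log collection. Other fields such as {{.Kubelet}} and {{.APIServer}} can be queried the same way (field names as documented for `minikube status`; this is an illustrative invocation, not taken from the log):

	out/minikube-linux-amd64 status -p embed-certs-974821 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'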
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25: (1.079261887s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Serv
erErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPa
th: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
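The kubeadm config printed above is rendered from the kubeadm options struct logged just before it. A minimal sketch of that render step with text/template, reduced to a fragment of the real config; the Params struct and template below are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Params carries only the fields the fragment below needs; the real
// struct in minikube has many more (see the kubeadm options line above).
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from the no-preload-671514 log lines above.
	p := Params{
		AdvertiseAddress: "192.168.76.2",
		BindPort:         8443,
		NodeName:         "no-preload-671514",
		CRISocket:        "unix:///var/run/crio/crio.sock",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.32.2",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}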
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
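Each download URL above carries a checksum=file:...sha256 fragment: fetch the binary, fetch the published SHA-256, and compare before caching. A hand-rolled sketch of that verification (minikube delegates this to its download package, so the helper below is a stand-in):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	// Tee the body through the hash while writing it to disk.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl"
	got, err := fetch(base, "kubectl")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubectl verified:", got)
}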
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
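The bash one-liner at 20:26:20.692137 keeps /etc/hosts idempotent: drop any existing line for the alias, append a fresh ip<TAB>name entry via a PID-suffixed temp file, then copy it back with sudo. A small Go function that assembles the same command string (running it over SSH is omitted here):

package main

import "fmt"

// hostsUpdateCmd builds the shell command seen in the log: strip any
// existing /etc/hosts entry for name, append "ip<TAB>name", and install
// the result via a $$-suffixed temp file so concurrent runs don't clobber it.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.76.2", "control-plane.minikube.internal"))
}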
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
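The profile certs above (client, apiserver, proxy-client) are each generated and signed against the shared minikubeCA, with the apiserver cert carrying the IP SANs listed at 20:26:21.025727. A compact crypto/x509 sketch of signing a serving cert with those SANs; unlike minikube, which loads .minikube/ca.key, this self-signs a throwaway CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert; error checks elided in setup for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs the log shows for the apiserver cert.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}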
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
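The openssl x509 -hash -noout calls above compute the OpenSSL subject hash that names the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch reproducing the hash-then-symlink dance with os/exec; the path is taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to openssl exactly as the log does and returns
// the short hash OpenSSL uses to name symlinks in /etc/ssl/certs.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// minikube then runs: ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}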
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
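The kubeadm init invocation at 20:26:22.488113 passes an explicit --ignore-preflight-errors list because, per kubeadm.go:214, SystemVerification cannot pass inside a docker-driver container. A sketch that assembles that command line; the ignore list is copied from the log:

package main

import (
	"fmt"
	"strings"
)

func main() {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	bin := "/var/lib/minikube/binaries/v1.32.2"
	// The PATH override makes kubeadm find the matching kubelet/kubectl.
	cmd := fmt.Sprintf(
		`sudo env PATH="%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		bin, strings.Join(ignores, ","))
	fmt.Println(cmd)
}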
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
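The network.go lines above scan candidate private /24 subnets, stepping the third octet by 9 (49, 58, 67, 76, 85, 94) until one is unclaimed, landing here on 192.168.103.0/24. A toy version of that scan; isTaken stands in for the bridge-interface inspection minikube actually performs:

package main

import (
	"fmt"
	"net"
)

// isTaken is a placeholder for minikube's real check, which inspects the
// host's bridge interfaces; here it just consults a set of known subnets.
func isTaken(taken map[string]bool, subnet string) bool { return taken[subnet] }

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	// Start at 192.168.49.0/24 and step the third octet by 9, as the log shows.
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if _, _, err := net.ParseCIDR(subnet); err != nil {
			break
		}
		if !isTaken(taken, subnet) {
			fmt.Println("using free private subnet", subnet) // prints 192.168.103.0/24
			return
		}
	}
}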
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
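kic.go:194 unpacks the preloaded image tarball directly into the cluster's docker volume by running tar (with lz4 decompression) inside a throwaway kicbase container. A sketch of that invocation via os/exec; the tarball path is a shortened placeholder:

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/user/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4" // placeholder
	volume := "default-k8s-diff-port-993330"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523"
	// Mount the tarball read-only, mount the volume at /extractDir, and let
	// tar unpack the preloaded images into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}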
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
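The crio.go steps above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with a series of sed one-liners (pause image, cgroup manager, conmon cgroup) before the restart the log times at ~2.75s. A condensed sketch of the same sed pass; the expressions are copied from the log, and plain os/exec stands in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		// Each edit mirrors one "sudo sed -i ..." line from the log.
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s: %v: %s", e, err, out))
		}
	}
	// Followed by: sudo systemctl daemon-reload && sudo systemctl restart crio
}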
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
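The block above is the full multi-document kubeadm configuration minikube renders for this profile: an InitConfiguration (node registration, advertise address), a ClusterConfiguration (component extraArgs, cert dir, etcd), a KubeletConfiguration (cgroupfs driver, eviction disabled for CI), and a KubeProxyConfiguration, which is then written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch of how such a multi-document YAML can be split and sanity-checked, assuming only gopkg.in/yaml.v3 (illustrative only, not minikube's own code):

	// Split the generated multi-document kubeadm YAML and report each
	// document's apiVersion/kind. Hypothetical sanity check, not minikube's
	// implementation.
	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
		}
	}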
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
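The one-liner above is an idempotent hosts-file pin: it filters out any existing control-plane.minikube.internal line, appends the fresh mapping, and only then copies the staged file over /etc/hosts, so a half-written file is never visible. A hypothetical Go equivalent of the same pattern (using a same-directory temp file and rename, where the logged command stages under /tmp and copies with sudo):

	package main

	import (
		"os"
		"strings"
	)

	// pinHost drops any existing line ending in "<tab><name>", appends the
	// fresh "<ip><tab><name>" mapping, and swaps the file into place.
	func pinHost(hostsPath, ip, name string) error {
		raw, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath)
	}

	func main() {
		_ = pinHost("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal")
	}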
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
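	Note the SAN list on the apiserver cert a few lines up: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]. The 10.96.0.1 entry is the first usable address of ServiceCIDR 10.96.0.0/12, which Kubernetes reserves for the in-cluster "kubernetes" Service, so pods dialing the apiserver through that ClusterIP validate against the same certificate. A standard-library sketch of deriving that address:

	package main

	import (
		"fmt"
		"net"
	)

	// firstHost returns the network address of the CIDR plus one, i.e. the
	// address Kubernetes assigns to the "kubernetes" Service.
	func firstHost(cidr string) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		next := make(net.IP, len(ip))
		copy(next, ip)
		next[3]++ // network address + 1
		return next, nil
	}

	func main() {
		ip, _ := firstHost("10.96.0.0/12")
		fmt.Println(ip) // 10.96.0.1
	}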
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
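Each trusted certificate is symlinked into /etc/ssl/certs under its OpenSSL subject hash (here b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs), which is the lookup scheme OpenSSL's hashed certificate directory expects. A hypothetical helper reproducing the logged openssl x509 -hash / ln -fs pair:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// rehash asks openssl for the certificate's subject hash and creates the
	// <hash>.0 symlink that OpenSSL looks up in /etc/ssl/certs.
	func rehash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: drop a stale link first
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := rehash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}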
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
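The four grep/rm pairs above are one sweep repeated per kubeconfig: if a file does not already point at https://control-plane.minikube.internal:8443 (here the files simply do not exist yet, hence the status-2 exits), it is removed so kubeadm regenerates it. A compact, hypothetical reconstruction of that pattern:

	package main

	import (
		"bytes"
		"os"
	)

	// sweep keeps a kubeconfig only if it already targets the expected
	// control-plane endpoint; otherwise it is removed for kubeadm to rewrite.
	func sweep(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && bytes.Contains(data, []byte(endpoint)) {
				continue // already points at the right endpoint
			}
			os.Remove(f) // missing or stale
		}
	}

	func main() {
		sweep("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}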
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
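The --publish=127.0.0.1::22 flags on the docker run above bind ephemeral host ports, and minikube recovers the actual port with the inspect template shown in the log ((index (index .NetworkSettings.Ports "22/tcp") 0).HostPort), which resolved to 33103 here. A hypothetical helper using the same template via the docker CLI:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshPort asks docker which ephemeral host port was bound to the
	// container's 22/tcp.
	func sshPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := sshPort("default-k8s-diff-port-993330")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh -p", port, "docker@127.0.0.1")
	}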
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
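Before installing its own CNI, minikube renames any stock loopback/bridge/podman configs in /etc/cni/net.d to *.mk_disabled (here 87-podman-bridge.conflist and 100-crio-bridge.conf) so the kindnet config it later deploys is the only active one. A sketch of that disable step, assuming nothing beyond the standard library:

	package main

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// disableDefaultCNIs renames bridge/podman CNI configs out of the way,
	// mirroring the logged find/mv invocations.
	func disableDefaultCNIs(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join(dir, name)
				if err := os.Rename(old, old+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		_ = disableDefaultCNIs("/etc/cni/net.d")
	}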
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
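CRI-O is reconfigured in place rather than re-templated: the sed commands above pin pause_image to registry.k8s.io/pause:3.10, set cgroup_manager = "cgroupfs" to match the detected host driver, force conmon_cgroup = "pod", and open unprivileged low ports before crio is restarted. A hypothetical Go analogue of one such keyed TOML edit:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setKey replaces a `key = value` line in a CRI-O drop-in, appending the
	// key if no existing line matched.
	func setKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		line := fmt.Sprintf("%s = %q", key, value)
		if re.Match(data) {
			data = re.ReplaceAll(data, []byte(line))
		} else {
			data = append(data, []byte("\n"+line+"\n")...)
		}
		return os.WriteFile(path, data, 0644)
	}

	func main() {
		_ = setKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
		_ = setKey("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
	}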
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
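	The --ignore-preflight-errors list above disables exactly the checks that are expected to fail or be irrelevant inside a Docker container: pre-existing manifest and data directories, port 10250, swap, CPU/memory minimums, SystemVerification, and the bridge-nf-call-iptables file check. As a hypothetical way to replay just this stage against the generated config, kubeadm exposes preflight as a standalone phase:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml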
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
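	The repeated "kubectl get sa default" lines above (one attempt roughly every 500ms in each of the parallel profiles) are minikube's elevateKubeSystemPrivileges step polling until the default ServiceAccount exists, i.e. until the controller-manager's ServiceAccount controller is up; the duration metric above covers that wait. A hypothetical reconstruction of the wait as a shell loop:
	
	    # Poll until the controller-manager has created the default ServiceAccount.
	    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done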
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
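The long sed pipeline at 20:26:37.522615 edits the coredns ConfigMap in place and replaces it, which is what the later "host record injected" line confirms. Reconstructed from the sed expression, the Corefile gains a hosts stanza (plus a log directive before errors) so that host.minikube.internal resolves to the host gateway:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }

fallthrough hands any query that does not match the static entry on to the existing forward . /etc/resolv.conf block.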
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
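kubeadm's kubelet-check and api-check phases above poll until the kubelet and the API server answer their health endpoints (here after ~1.0s and ~4.5s). Once a kubeconfig works, the same endpoints can be probed by hand; /readyz and /livez are the current health paths (a sketch, not taken from this log):

    kubectl get --raw '/readyz?verbose'   # per-check readiness breakdown
    kubectl get --raw '/livez'            # liveness only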
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
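Each init run ends with the same three preflight warnings. The cgroups v1 and kernel-config warnings are informational on this 5.15.0-1078-gcp host kernel; the Service-Kubelet warning is what the later `sudo systemctl start kubelet` calls work around, and on a persistent host it would normally be silenced by enabling the unit (standard systemd usage, not from this log):

    # enable and start the kubelet unit so kubeadm's preflight check passes
    sudo systemctl enable --now kubelet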
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
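The ops.go line above reads the apiserver's OOM adjustment to confirm the control plane is shielded from the kernel OOM killer: -16 on the legacy /proc/<pid>/oom_adj scale (range -17..15) corresponds to roughly -941 on the modern oom_score_adj scale, making the apiserver nearly the last process the kernel will kill. Both knobs can be read directly:

    pid=$(pgrep kube-apiserver)
    cat "/proc/$pid/oom_adj"        # legacy scale, -17..15
    cat "/proc/$pid/oom_score_adj"  # current scale, -1000..1000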
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
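The half-second `kubectl get sa default` retries above (20:26:38.19 through 20:26:42.19) are minikube's elevateKubeSystemPrivileges wait: the call only succeeds once the controller-manager's ServiceAccount controller has created the default ServiceAccount, after which the minikube-rbac cluster-admin binding created at 20:26:38.021121 is meaningful. The loop is equivalent to (command string taken verbatim from the log):

    until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done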
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
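The toEnable map above shows only storage-provisioner and default-storageclass switched on; every other addon is explicitly false. Per profile, the same set can be inspected or extended from the minikube CLI (standard commands; profile name taken from this run):

    minikube addons list -p embed-certs-974821
    minikube addons enable metrics-server -p embed-certs-974821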
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
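The kapi.go:214 rescale above drops coredns from kubeadm's default of two replicas down to one, which is all a single-node cluster needs. The manual equivalent:

    kubectl -n kube-system scale deployment coredns --replicas=1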
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
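From here on the log is dominated by node_ready polling: each profile waits up to 6m0s for its node's Ready condition, which the kubelet only reports True once its runtime and network checks pass (with kindnet, the CNI config must be in place first). The condition can be read directly (kubectl jsonpath filter syntax; node name from this run):

    kubectl get node embed-certs-974821 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'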
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
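None of the four nodes reports Ready for the remainder of this log: the polling below runs from 20:26:44 past 20:29 with every probe returning False. For a fresh kindnet cluster stuck NotReady, a reasonable first triage (not taken from this log; the app=kindnet label is an assumption about minikube's kindnet manifest) is to check the node's condition messages and whether the CNI pods ever started:

    kubectl describe node default-k8s-diff-port-993330     # Conditions and Events
    kubectl -n kube-system get pods -l app=kindnet -o wide  # assumed kindnet label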
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
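
Note: the four start attempts above are the same failure four times. minikube (node_ready.go) polls each node's "Ready" condition roughly every 2s, gives up after 4m ("took 4m0.000055311s for node ... to be \"Ready\""), and that in turn trips the 6m GUEST_START deadline. The wait can be reproduced by hand against any of these profiles; a minimal sketch, using the embed-certs-974821 profile/context name taken from the logs:

	kubectl --context embed-certs-974821 wait node/embed-certs-974821 --for=condition=Ready --timeout=4m
	kubectl --context embed-certs-974821 get node embed-certs-974821 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

The second command prints the kubelet's own explanation for the False status (the same message that appears in the "describe nodes" section below).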
	
	
	==> CRI-O <==
	Apr 01 20:26:42 embed-certs-974821 crio[1032]: time="2025-04-01 20:26:42.277869589Z" level=info msg="Created container dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a: kube-system/kube-proxy-gn6mh/kube-proxy" id=8078dbf3-c099-4b9e-99f9-64e49922ad7a name=/runtime.v1.RuntimeService/CreateContainer
	Apr 01 20:26:42 embed-certs-974821 crio[1032]: time="2025-04-01 20:26:42.279079396Z" level=info msg="Starting container: dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a" id=cc211dcd-a4fb-490e-a86a-b5cd16e0a654 name=/runtime.v1.RuntimeService/StartContainer
	Apr 01 20:26:42 embed-certs-974821 crio[1032]: time="2025-04-01 20:26:42.289497364Z" level=info msg="Started container" PID=1834 containerID=dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a description=kube-system/kube-proxy-gn6mh/kube-proxy id=cc211dcd-a4fb-490e-a86a-b5cd16e0a654 name=/runtime.v1.RuntimeService/StartContainer sandboxID=149ac7d6539bc32bb99370722c47b7916fb2c406dec06f4b56886652994cf3e5
	Apr 01 20:27:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:30.103092955Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d133cc1f-e687-4604-80ea-026ce851dfb7 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:30.103392973Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d133cc1f-e687-4604-80ea-026ce851dfb7 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:40 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:40.948990189Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5be61827-2106-4265-8d96-be49f97d5117 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:40 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:40.949264356Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5be61827-2106-4265-8d96-be49f97d5117 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:40 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:40.949740659Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=558e57aa-0c05-49e7-8c3b-8a023512c39a name=/runtime.v1.ImageService/PullImage
	Apr 01 20:27:40 embed-certs-974821 crio[1032]: time="2025-04-01 20:27:40.963063883Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:28:23 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:23.948782014Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1e5ca62f-9c13-4a15-ac27-b5f23f1e8431 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:23 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:23.949065044Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1e5ca62f-9c13-4a15-ac27-b5f23f1e8431 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:35 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:35.948336896Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6df9e681-d692-4542-875b-69bbd890cefe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:35 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:35.948568256Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6df9e681-d692-4542-875b-69bbd890cefe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:35 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:35.949146311Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=03887f4c-c692-4b99-8b46-aa76c5b152c1 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:28:35 embed-certs-974821 crio[1032]: time="2025-04-01 20:28:35.961925830Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:20 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:20.948827752Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=b5382220-220c-4448-af36-cd60628e3c48 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:20 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:20.949128520Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=b5382220-220c-4448-af36-cd60628e3c48 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:34.949062344Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=da59b652-d180-4456-b638-b5a12ab75b25 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:34.949365040Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=da59b652-d180-4456-b638-b5a12ab75b25 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:49 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:49.948803241Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=58deb72d-a893-48f6-a242-718a30a8d21e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:49 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:49.949049025Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=58deb72d-a893-48f6-a242-718a30a8d21e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:49 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:49.949710147Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1a109043-9e5d-41d7-8ed0-01f39ffc64f4 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:29:49 embed-certs-974821 crio[1032]: time="2025-04-01 20:29:49.951918894Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:30:32 embed-certs-974821 crio[1032]: time="2025-04-01 20:30:32.948452287Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8fa55496-36a5-4edb-a5dd-492594ef3091 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:30:32 embed-certs-974821 crio[1032]: time="2025-04-01 20:30:32.948749756Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8fa55496-36a5-4edb-a5dd-492594ef3091 name=/runtime.v1.ImageService/ImageStatus
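
Note: the CRI-O log above points at the likely root cause. The kindnet CNI image docker.io/kindest/kindnetd:v20250214-acbabc1a is never successfully pulled; each "Trying to access" attempt is only ever followed by another "Image ... not found" check 40-50s later, which is consistent with Docker Hub pulls timing out or being rate-limited on this CI host. A hedged workaround sketch is to side-load the image from the host cache instead of pulling it inside the node:

	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

Once the image is present, the kindnet DaemonSet pod can start, write its CNI config into /etc/cni/net.d/, and the node's Ready condition should flip to True.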
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dab987ff7f406       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   4 minutes ago       Running             kube-proxy                0                   149ac7d6539bc       kube-proxy-gn6mh
	132535ef7e958       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      0                   4731f2f1d181b       etcd-embed-certs-974821
	74706ee864871       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   0                   d15bcc723fd1f       kube-controller-manager-embed-certs-974821
	820d4cbf19595       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            0                   d173b3672c77c       kube-apiserver-embed-certs-974821
	7eaba18859263       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            0                   789d6e327dc78       kube-scheduler-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:30:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:26:37 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:26:37 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:26:37 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:26:37 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 089bcdcc4f154a62af892e7332fe1d3b
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m7s
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 4m1s  kube-proxy       
	  Normal   Starting                 4m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m8s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m7s  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m7s  kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m7s  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m4s  node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
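
Note: the node description confirms the picture. The node is stuck at Ready=False with reason KubeletNotReady ("No CNI configuration file in /etc/cni/net.d/"), still carries both not-ready taints, and the kindnet pod listed above (kindnet-bq54h) has no image to run with. Two quick checks, assuming the same profile and pod names as this report:

	minikube -p embed-certs-974821 ssh -- ls /etc/cni/net.d/
	kubectl --context embed-certs-974821 -n kube-system describe pod kindnet-bq54h

The first should come back empty; the second should show the pod stuck waiting on the kindnetd image pull.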
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [132535ef7e958754bdbf8341d8f37e53b56cb185ee74f78902764c4aaf5544ae] <==
	{"level":"info","ts":"2025-04-01T20:26:31.923065Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:26:31.923177Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:31.923237Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:31.923556Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:31.923632Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:32.664082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.665247Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-974821 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:32.665313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665616Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666258Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666367Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:32.667046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667079Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:30:44 up  1:13,  0 users,  load average: 0.33, 2.72, 2.48
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [820d4cbf19595741dcb7bf30a4333deced286f0e097e71b59aafcd4be0161d9d] <==
	I0401 20:26:34.517957       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0401 20:26:34.524149       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0401 20:26:34.531430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:34.533518       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:34.533607       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 20:26:34.542892       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:34.542994       1 policy_source.go:240] refreshing policies
	E0401 20:26:34.587040       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:34.624866       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:34.727984       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:35.340661       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:35.345732       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:35.345773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:35.814161       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:35.858128       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:35.960870       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:35.967529       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0401 20:26:35.968831       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:35.973430       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:36.450795       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:37.040714       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:37.058369       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:37.073730       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:41.052685       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:41.852837       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [74706ee86487117baef163b1da8dc8bd6bd6f7b6d9e5e299c0a2f4e7b089ab0c] <==
	I0401 20:26:41.000240       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:26:41.000246       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:26:41.000264       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:26:41.000582       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:41.000636       1 shared_informer.go:320] Caches are synced for cronjob
	I0401 20:26:41.000694       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:41.001404       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:41.001439       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:41.003713       1 shared_informer.go:320] Caches are synced for service account
	I0401 20:26:41.006902       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-974821" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:41.006931       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007090       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.007092       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:26:41.017896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:41.020103       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:26:41.026422       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.173290       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.358163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:42.119656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.064490878s"
	I0401 20:26:42.130355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.575326ms"
	I0401 20:26:42.130626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="125.57µs"
	I0401 20:26:43.227258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="158.555429ms"
	I0401 20:26:43.243190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.765842ms"
	I0401 20:26:43.246386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.123µs"
	
	
	==> kube-proxy [dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a] <==
	I0401 20:26:42.428649       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:42.664637       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:26:42.664720       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:42.864985       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:42.865059       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:42.867616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:42.868124       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:42.868224       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:42.869989       1 config.go:199] "Starting service config controller"
	I0401 20:26:42.870084       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:42.870303       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:42.870892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:42.870787       1 config.go:329] "Starting node config controller"
	I0401 20:26:42.871044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:42.970938       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:42.971176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:42.974999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7eaba18859263cff2209aeee6e1ec276f41b4d381c0ad36d0b34b5698e41351d] <==
	W0401 20:26:34.532713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:34.533076       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532738       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:34.533134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532794       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:34.533157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532850       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:34.533254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532864       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:34.533278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.533021       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:34.533298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.402058       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:35.402103       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:35.536405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:35.536453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.552040       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:35.552168       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.597795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:35.597857       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.624483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.624531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.630009       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.630051       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:38.324795       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:29:57 embed-certs-974821 kubelet[1655]: E0401 20:29:57.058406    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539397058138440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:57 embed-certs-974821 kubelet[1655]: E0401 20:29:57.058447    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539397058138440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:57 embed-certs-974821 kubelet[1655]: E0401 20:29:57.080956    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:02 embed-certs-974821 kubelet[1655]: E0401 20:30:02.082752    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:07 embed-certs-974821 kubelet[1655]: E0401 20:30:07.059403    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539407059188912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:07 embed-certs-974821 kubelet[1655]: E0401 20:30:07.059471    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539407059188912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:07 embed-certs-974821 kubelet[1655]: E0401 20:30:07.083921    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:12 embed-certs-974821 kubelet[1655]: E0401 20:30:12.085564    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:17 embed-certs-974821 kubelet[1655]: E0401 20:30:17.060525    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539417060355731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:17 embed-certs-974821 kubelet[1655]: E0401 20:30:17.060573    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539417060355731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:17 embed-certs-974821 kubelet[1655]: E0401 20:30:17.086965    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:21 embed-certs-974821 kubelet[1655]: E0401 20:30:21.708147    1655 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:21 embed-certs-974821 kubelet[1655]: E0401 20:30:21.708231    1655 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:21 embed-certs-974821 kubelet[1655]: E0401 20:30:21.708411    1655 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqrvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-bq54h_kube-system(f880d90a-5596-4ce4-b2e9-ab4094de1621): ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:30:21 embed-certs-974821 kubelet[1655]: E0401 20:30:21.709670    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:30:22 embed-certs-974821 kubelet[1655]: E0401 20:30:22.088575    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:27 embed-certs-974821 kubelet[1655]: E0401 20:30:27.061686    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539427061462378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:27 embed-certs-974821 kubelet[1655]: E0401 20:30:27.061728    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539427061462378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:27 embed-certs-974821 kubelet[1655]: E0401 20:30:27.090132    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:32 embed-certs-974821 kubelet[1655]: E0401 20:30:32.091679    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:32 embed-certs-974821 kubelet[1655]: E0401 20:30:32.949014    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:30:37 embed-certs-974821 kubelet[1655]: E0401 20:30:37.062780    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539437062556808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:37 embed-certs-974821 kubelet[1655]: E0401 20:30:37.062814    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539437062556808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:37 embed-certs-974821 kubelet[1655]: E0401 20:30:37.092338    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:42 embed-certs-974821 kubelet[1655]: E0401 20:30:42.093688    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
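The dump above points to a single root cause: the kindnet CNI image pull fails against Docker Hub's unauthenticated pull rate limit (toomanyrequests), so no CNI config is ever written to /etc/cni/net.d/ and the kubelet keeps reporting "Container runtime network not ready". The kube-scheduler "forbidden" messages earlier in the dump are the usual startup race before RBAC caches sync (note the closing "Caches are synced" line) and are benign here. A minimal sketch for confirming this from a workstation, assuming the embed-certs-974821 profile still exists and the host Docker daemon is also unauthenticated:

  # Should reproduce the toomanyrequests error registry-side
  docker pull kindest/kindnetd:v20250214-acbabc1a

  # Once the image is available locally, sideload it into the minikube
  # node so the kubelet no longer needs to pull from Docker Hub
  minikube -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a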
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1 (76.476903ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1
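The NotFound errors here are an artifact of how the post-mortem is invoked rather than evidence the pods vanished: the non-running pods were listed across all namespaces (-A), but kubectl describe pod was run without a namespace flag and therefore searched only default, while coredns, kindnet, and storage-provisioner live in kube-system. A sketch of the invocation that would have resolved them, assuming the embed-certs-974821 context was still reachable at that point:

  kubectl --context embed-certs-974821 -n kube-system describe pod coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner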
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (274.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (267.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0401 20:26:29.190675   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.582981   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.589358   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.600690   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.622111   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.663480   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.744986   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:05.906801   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:06.228695   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:06.871019   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:08.152864   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:10.715014   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:15.837084   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:26.078977   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:26.124406   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.468385   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.474761   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.486128   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.507501   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.548912   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.630497   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:45.791984   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:46.113633   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:46.561248   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:46.754873   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:48.036969   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:50.598688   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:28:55.720464   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:05.962507   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.013218   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.019593   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.030935   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.052406   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.093820   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.175277   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.336813   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:07.659124   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:08.301412   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:09.583271   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:12.145372   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:17.267447   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:26.444082   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:27.509178   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:27.522881   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.790844   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.797268   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.808599   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.829961   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.871314   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:29.952672   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:30.114320   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:30.435959   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:31.077907   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:32.359397   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:34.921520   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:40.043140   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:47.990688   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:50.284702   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:53.251847   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.736157   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.742570   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.753922   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.775350   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.816776   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:56.898220   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:57.059877   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:57.381565   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:58.023649   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:29:59.305582   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:01.867714   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:06.989875   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:07.405974   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:10.766930   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:17.231755   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:28.952655   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.514737   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.521101   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.532972   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.554447   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.595852   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.677343   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.710797   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.713183   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.717625   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.728997   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.750339   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.791722   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.839122   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:37.873485   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:38.035342   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:38.160850   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:38.356654   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
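The wall of cert_rotation errors above is emitted by the shared test process (pid 23163), whose client-go certificate reloaders still watch client.crt files belonging to profiles that earlier parallel tests already deleted (auto-460236, kindnet-460236, calico-460236, custom-flannel-460236, enable-default-cni-460236, flannel-460236, bridge-460236, addons-649141, functional-432066). It is background noise and not the cause of this test's failure. Outside a CI run, a sketch for clearing stale profile state, assuming no other minikube clusters on the host are needed:

  # Removes all minikube profiles and cached state; destructive
  minikube delete --all --purge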
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m25.786279608s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
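Note that the stdout above reads like a successful start, yet the command exited with status 80 after 4m25s: with --wait=true minikube verifies every component in the VerifyComponents map recorded in the config (apiserver, system_pods, node_ready, and so on), and, most likely for the same Docker Hub rate-limit reason seen in the embed-certs logs above, the kindnet pod never starts and the node never reaches Ready. A sketch for inspecting the resulting half-started cluster, assuming the profile still exists:

  minikube status -p default-k8s-diff-port-993330
  kubectl --context default-k8s-diff-port-993330 get nodes -o wide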
** stderr ** 
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
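The provisioning config dumped above maps onto minikube's cluster config structure. A trimmed-down Go sketch of that shape, with field names taken from the dump (the real minikube config type has many more fields and may differ in detail):

// Sketch only: a reduced view of the cluster config printed in the log above.
package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string // "v1.32.2" in this run
	ClusterName       string
	ContainerRuntime  string // "crio"
	ServiceCIDR       string // "10.96.0.0/12"
}

type ClusterConfig struct {
	Name             string
	Memory           int // MB; 2200 above
	CPUs             int
	Driver           string // "docker"
	APIServerPort    int    // 8444, the "diff port" this profile tests
	KubernetesConfig KubernetesConfig
}

func main() {
	cfg := ClusterConfig{
		Name:          "default-k8s-diff-port-993330",
		Memory:        2200,
		CPUs:          2,
		Driver:        "docker",
		APIServerPort: 8444,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.32.2",
			ClusterName:       "default-k8s-diff-port-993330",
			ContainerRuntime:  "crio",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	fmt.Printf("%+v\n", cfg)
}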
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
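The subnet scan above walks candidate /24 networks until one is free. A minimal Go sketch of the assumed behavior (starting at 192.168.49.0/24 and stepping the third octet by 9, which matches the skipped subnets 49, 58, 67, 76, 85, 94 in the log; not minikube's exact network.go logic):

// Sketch: pick the first private /24 not already claimed by a bridge.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24, as chosen above
}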
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
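The preload step above unpacks the lz4 image tarball into the named Docker volume using a throwaway tar container. A self-contained Go sketch of the same docker invocation (paths shortened; image ref and flags taken from the logged command):

// Sketch: extract a preloaded image tarball into a docker volume, as logged above.
package main

import (
	"log"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro", // mount tarball read-only
		"-v", volume+":/extractDir",       // mount the target volume
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Printf("tar output: %s", out)
	}
	return err
}

func main() {
	// Path truncated for illustration; the full path appears in the log.
	_ = extractPreload(
		"preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4",
		"default-k8s-diff-port-993330",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523")
}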
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
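For reference, the long `docker run` above groups into a few concerns. A sketch rebuilding a subset of those flags with comments (same flags as logged, minus labels, --expose, and the volume mount; not minikube's code):

// Sketch: the flag groups behind the container-create command logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{"run", "-d", "-t",
		// systemd-in-docker requirements:
		"--privileged", "--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		// identity and resources:
		"--hostname", "default-k8s-diff-port-993330",
		"--name", "default-k8s-diff-port-993330",
		"--memory=2200mb", "--cpus=2",
		// static IP on the per-profile network:
		"--network", "default-k8s-diff-port-993330", "--ip", "192.168.103.2",
		// host-forwarded ports: apiserver (8444), ssh, docker, registry, ingress
		"--publish=127.0.0.1::8444", "--publish=127.0.0.1::22",
		"--publish=127.0.0.1::2376", "--publish=127.0.0.1::5000",
		"--publish=127.0.0.1::32443",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523",
	}
	fmt.Println(exec.Command("docker", args...).String())
}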
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
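The SSH command above writes /etc/sysconfig/crio.minikube so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarts crio. A small Go sketch composing the same script (intent inferred from the log):

// Sketch: build the crio.minikube sysconfig write + restart command, as logged above.
package main

import "fmt"

func main() {
	serviceCIDR := "10.96.0.0/12"
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	script := "sudo mkdir -p /etc/sysconfig && printf %s \"\n" + opts +
		"\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
	fmt.Println(script)
}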
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
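The machines lock acquired before provisioning and released here uses retry options of the form {Delay:500ms Timeout:10m0s}. A sketch of that acquire/release pattern under those assumed semantics (a plain lock file with polling; not minikube's actual lock implementation):

// Sketch: retry every Delay until Timeout to take an exclusive lock file.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation atomic: it fails if the lock file already exists.
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}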
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
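The series of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart. A Go sketch of the two central rewrites (pause image and cgroup manager) as in-memory regexp replacements (an assumed, simplified equivalent of the sed commands):

// Sketch: the pause_image and cgroup_manager rewrites applied via sed above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`

	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Println(conf)
}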
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
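The kubeadm.yaml above is rendered from the kubeadm options logged at kubeadm.go:189. A minimal text/template sketch of that rendering for two of the fields (illustrative only, not minikube's real template):

// Sketch: render the InitConfiguration header of the kubeadm config above.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		APIServerPort    int
	}{"192.168.103.2", 8444}) // values from the options dump above
}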
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
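The apiserver profile cert generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]. A self-contained Go sketch producing a cert with the same SAN list (self-signed for brevity, whereas minikube signs with its profile CA; 26280h matches CertExpiration from the config dump):

// Sketch: issue a serving cert with the IP SANs logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}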
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
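The openssl/ln sequence above links each PEM into /etc/ssl/certs under its subject hash, which is how OpenSSL's hash-based lookup finds trust anchors (e.g. b5213941.0 for minikubeCA.pem). A sketch of that step (assumed simplification of the logged commands):

// Sketch: compute the subject hash of a PEM and symlink it as <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	return os.Symlink(pemPath, "/etc/ssl/certs/"+hash+".0")
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}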
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:44.456485  333931 node_ready.go:38] duration metric: took 4m0.003543817s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:30:44.458297  333931 out.go:201] 
	W0401 20:30:44.459571  333931 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:44.459594  333931 out.go:270] * 
	W0401 20:30:44.460727  333931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:44.461950  333931 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
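This failure follows the same pattern as the other FirstStart failures in this run: kubeadm init completes, the addons are enabled, but the node never reports "Ready" before the 6m0s wait expires, which is consistent with the recommended kindnet CNI never becoming functional. A minimal diagnostic sketch, assuming the kubeconfig path and profile name shown in the log above (standard kubectl invocations, not part of the test harness):

	export KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	# Inspect the node's conditions; with a missing CNI the kubelet usually
	# reports Ready=False with "container runtime network not ready"
	kubectl --context default-k8s-diff-port-993330 get nodes -o wide
	kubectl --context default-k8s-diff-port-993330 describe node default-k8s-diff-port-993330
	# Check whether the kindnet and coredns pods ever left Pending
	kubectl --context default-k8s-diff-port-993330 -n kube-system get pods -o wide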
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:24.363626089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e116c8681f9a446b4eb5781093640ab52b0549a1b9c009ec7c6caa169d37f052",
	            "SandboxKey": "/var/run/docker/netns/e116c8681f9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:ed:d0:09:db:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "cfed49f55c5786829041c1b4d8f3804c0fe9eba623f6b8950b4c8d49cc775ef9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
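For quicker spot checks, the same inspect data can be narrowed with a Go-template --format string, matching the pattern the harness itself uses in the cli_runner lines above. A small sketch against the profile under test (field names taken from the inspect output above):

	# Container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' default-k8s-diff-port-993330
	# Host port mapped to the guest SSH port 22 (33103 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-993330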
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.18378514s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
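
The table above is minikube's audit trail: one row per CLI invocation, with the profile it targeted, the invoking user, the minikube version, and start/end timestamps (an empty end time generally indicates the command had not completed, or failed, when the log was captured). minikube persists the same data as JSON lines under its home directory; a minimal Go sketch for scanning such a file (the path and field names here are assumptions inferred from the table columns, not a confirmed schema):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // auditRow mirrors the columns in the table above; the JSON field
    // names are assumptions, not a confirmed minikube schema.
    type auditRow struct {
        Command   string `json:"command"`
        Args      string `json:"args"`
        Profile   string `json:"profile"`
        User      string `json:"user"`
        Version   string `json:"version"`
        StartTime string `json:"startTime"`
        EndTime   string `json:"endTime"`
    }

    func main() {
        // Assumed location of the audit log; adjust for MINIKUBE_HOME.
        f, err := os.Open(os.ExpandEnv("$HOME/.minikube/logs/audit.json"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            var r auditRow
            if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
                continue // tolerate rows that don't match this shape
            }
            fmt.Printf("%-8s %-30s %s %s\n", r.Command, r.Profile, r.StartTime, r.EndTime)
        }
    }
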
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
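
This config dump is the in-memory cluster spec that gets saved to the profile's config.json a few lines later. A trimmed Go sketch of reading a few of these fields back from that file (the struct is limited to fields visible in the dump above; the real minikube config type is much larger):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // A trimmed view of the profile config: only fields visible in the
    // dump above. The real minikube ClusterConfig has many more.
    type clusterConfig struct {
        Name             string
        Memory           int
        CPUs             int
        Driver           string
        APIServerPort    int
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
            ContainerRuntime  string
            ServiceCIDR       string
        }
    }

    func main() {
        // e.g. .minikube/profiles/default-k8s-diff-port-993330/config.json
        b, err := os.ReadFile("config.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var cc clusterConfig
        if err := json.Unmarshal(b, &cc); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("%s: Kubernetes %s on %s/%s, apiserver port %d\n",
            cc.Name, cc.KubernetesConfig.KubernetesVersion,
            cc.Driver, cc.KubernetesConfig.ContainerRuntime, cc.APIServerPort)
    }
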
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
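
The shell block above is minikube's idempotent /etc/hosts fix-up: leave the file alone if the hostname is already mapped, rewrite the 127.0.1.1 line if one exists, otherwise append one. The same logic as a simplified Go sketch (a re-implementation for illustration, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: do nothing if the
    // hostname is already mapped, rewrite an existing 127.0.1.1 line,
    // otherwise append one.
    func ensureHostsEntry(path, hostname string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        content := string(b)
        // Rough equivalent of the grep -xq guard in the shell above.
        if strings.Contains(content, " "+hostname+"\n") ||
            strings.Contains(content, "\t"+hostname+"\n") {
            return nil
        }
        lines := strings.Split(content, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
            }
        }
        return os.WriteFile(path, append(b, []byte("127.0.1.1 "+hostname+"\n")...), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "embed-certs-974821"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
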
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
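
configureAuth above issues a Docker machine server certificate whose SAN list covers every name a client might dial: the loopback address, the container's network IP, the machine hostname, plus localhost and minikube. A self-contained sketch of building such a SAN list with crypto/x509 (self-signed here for brevity; minikube signs against the CA key from the lines above):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-974821"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            DNSNames:    []string{"embed-certs-974821", "localhost", "minikube"},
        }
        // Self-signed for brevity; minikube signs with its machine CA.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
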
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
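
The kubelet drop-in printed above overrides the packaged unit: the empty ExecStart= clears the distro's command line so the second ExecStart= fully replaces it, wiring the kubelet to the per-version binary and the node's name and IP. A sketch of rendering such a drop-in with text/template (the template body is an approximation of the output above, not minikube's actual template, which carries more flags):

    package main

    import (
        "os"
        "text/template"
    )

    // Approximation of the drop-in shown above, reduced to a few flags.
    const unitTmpl = "[Unit]\nWants={{.Runtime}}.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.32.2",
            "Node":    "no-preload-671514",
            "IP":      "192.168.76.2",
        })
    }
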
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
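
This generated config stacks four kubeadm API documents in one file: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane layout, cert SANs, etcd), KubeletConfiguration (eviction thresholds disabled so minikube keeps running on nearly-full disks), and KubeProxyConfiguration (conntrack timeouts zeroed so kube-proxy skips the corresponding sysctl writes). It is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; kubeadm can parse and validate such a file without touching the node. A sketch of driving that from Go (--dry-run is a standard kubeadm init flag; running it still requires the kubeadm binary and typically root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Parse and validate the generated config without modifying the
        // host; path as written by minikube in the log below.
        cmd := exec.Command("kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml.new",
            "--dry-run")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "config rejected:", err)
        }
    }
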
	
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
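
The binary downloads above carry a ?checksum=file:<url>.sha256 query, which tells the downloader to fetch the sidecar digest and verify the artifact before it is used. The verification step itself is plain SHA-256; a minimal Go sketch (assuming, as for dl.k8s.io, that the .sha256 file holds a bare hex digest):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verify compares a file's SHA-256 against the hex digest stored
    // in a sidecar .sha256 file, as the checksum= URLs above imply.
    func verify(binPath, sumPath string) error {
        want, err := os.ReadFile(sumPath)
        if err != nil {
            return err
        }
        f, err := os.Open(binPath)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", binPath, got)
        }
        return nil
    }

    func main() {
        if err := verify("kubelet", "kubelet.sha256"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
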
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
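
Each of kubectl, kubelet, and kubeadm above goes through the same probe-then-transfer sequence: stat the target path on the node, and when it exits with status 1 the binary is copied out of the local cache. The pattern, sketched over a hypothetical Runner interface standing in for minikube's ssh_runner:

    package main

    import "fmt"

    // Runner is a hypothetical stand-in for minikube's ssh_runner:
    // Run executes a remote command, Copy transfers a local file.
    type Runner interface {
        Run(cmd string) error
        Copy(local, remote string) error
    }

    // ensureBinary mirrors the log above: probe with stat, copy on miss.
    func ensureBinary(r Runner, local, remote string) error {
        if err := r.Run(`stat -c "%s %y" ` + remote); err == nil {
            return nil // already present, skip the transfer
        }
        return r.Copy(local, remote)
    }

    // fake lets the sketch run without a real node.
    type fake struct{}

    func (fake) Run(cmd string) error { return fmt.Errorf("exit status 1") }
    func (fake) Copy(local, remote string) error {
        fmt.Println("scp", local, "->", remote)
        return nil
    }

    func main() {
        _ = ensureBinary(fake{}, "cache/linux/amd64/v1.32.2/kubeadm",
            "/var/lib/minikube/binaries/v1.32.2/kubeadm")
    }
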
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
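
The apiserver certificate generated above includes 10.96.0.1 in its IP SANs alongside 127.0.0.1, 10.0.0.1, and the node IP: that is the first usable address of the ServiceCIDR (10.96.0.0/12), where the in-cluster kubernetes.default service points, so clients inside the cluster can verify the apiserver's certificate. A sketch of deriving it (a simple last-octet increment, adequate for a /12 like this; not general-purpose CIDR arithmetic):

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the first usable address in a service
    // CIDR, e.g. 10.96.0.1 for 10.96.0.0/12, which is the address the
    // in-cluster kubernetes.default service is assigned.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("IPv4 CIDR expected, got %s", cidr)
        }
        first := make(net.IP, len(ip))
        copy(first, ip)
        first[3]++ // network address + 1; fine for a /12 like the above
        return first, nil
    }

    func main() {
        ip, err := firstServiceIP("10.96.0.0/12")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(ip) // 10.96.0.1
    }
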
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
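
The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's c_rehash convention: /etc/ssl/certs is indexed by subject hash, so the TLS stack can locate a CA certificate by hashing the subject it needs. Reproducing the hash step by shelling out to the same openssl invocation the log uses:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // subjectHash runs the same command the log shows and returns the
    // hash used to name the /etc/ssl/certs symlink (e.g. "b5213941").
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
    }
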
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
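
The "skipping subnet ... that is taken" lines above walk candidate /24 blocks at a stride of 9 (192.168.49.0, .58, .67, .76, .85, .94) until one with no local interface turns up, here 192.168.103.0/24. A simplified sketch of such a scan, not minikube's actual network.go:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside subnet,
// which is how an existing docker bridge (holding the gateway IP) shows up.
func taken(subnet *net.IPNet) bool {
	addrs, _ := net.InterfaceAddrs()
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third <= 255; third += 9 {
		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if !taken(subnet) {
			fmt.Println("using free private subnet", subnet) // e.g. 192.168.103.0/24
			return
		}
	}
}
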
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
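
The docker run above unpacks the preloaded image tarball into the profile's named volume by starting a throwaway kicbase container whose entrypoint is overridden to tar. A hedged Go sketch of the same one-shot pattern; the tarball, volume, and image names below are placeholders:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Placeholder values; the real run mounts the kicbase digest shown above.
	tarball := "/path/to/preloaded-images.tar.lz4"
	volume := "some-profile"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46"
	// Mount the tarball read-only, mount the volume, and let tar do the work.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
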
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
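
Both 60-second waits above, first for the crio.sock path and then for crictl to answer, come down to polling with a deadline. A minimal sketch of such a loop; the 500ms interval is an assumption, not necessarily minikube's value:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
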
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
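
The bash one-liner above refreshes a single /etc/hosts entry without touching the rest of the file: filter out any stale line ending in the host name, append the fresh mapping, and copy the result back into place. The same logic as a Go sketch (simplified in that it writes the file directly rather than going through /tmp):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+name) { // drop the stale mapping, keep the rest
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.94.2\t"+name) // fresh entry, as in the log above
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
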
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
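
The --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". It can be recomputed from the CA cert; a short Go sketch, reading from minikube's certs directory:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm's join pin does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
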
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
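	The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init re-creates it. Condensed into a loop (endpoint and file names as in the log):

		endpoint="https://control-plane.minikube.internal:8443"
		for name in admin kubelet controller-manager scheduler; do
		  conf="/etc/kubernetes/${name}.conf"
		  # keep the file only if it already points at the expected endpoint
		  sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
		done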
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
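	The two waits above poll local HTTP endpoints: first the kubelet's healthz on port 10248, then the API server's readiness. The kubelet probe can be reproduced by hand (URL exactly as printed in the log):

		# kubeadm's kubelet-check hits this endpoint until it answers
		curl -s http://127.0.0.1:10248/healthz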
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
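	SSH provisioning for the kic container is three steps above: generate a machine-specific RSA key, stage the public half as /home/docker/.ssh/authorized_keys inside the container, and fix its ownership. A compressed docker equivalent, not minikube's exact code path (container name from the log, local key path illustrative):

		# copy the public key into the kic container and hand it to the docker user
		docker cp id_rsa.pub default-k8s-diff-port-993330:/home/docker/.ssh/authorized_keys
		docker exec --privileged default-k8s-diff-port-993330 \
		  chown docker:docker /home/docker/.ssh/authorized_keys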
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
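	The CNI cleanup above renames every bridge/podman config to *.mk_disabled so CRI-O does not bring up a default network before kindnet (recommended further down) is applied. The find/-exec pattern from the log, with shell quoting made explicit:

		# disable competing CNI configs by renaming them in place
		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
		  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;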
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
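	The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl, followed by a daemon-reload and restart. The two load-bearing edits, condensed (file path and values taken from the log):

		conf=/etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
		sudo systemctl daemon-reload && sudo systemctl restart crio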
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
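	The dump above is the fully rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check such a file against the matching kubeadm binary without touching the node, a dry run works (hypothetical invocation, not part of this log):

		# validate the rendered config; --dry-run makes no changes to the host
		sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
		  --config /var/tmp/minikube/kubeadm.yaml --dry-run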
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
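	The profile certs generated above are all signed by the shared minikubeCA; note the apiserver cert's SAN list ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]) covering the in-cluster service IP, loopback, and the node IP. To inspect the SANs on the resulting file one could run (path from the log; the -ext flag needs OpenSSL 1.1.1 or newer):

		# print the Subject Alternative Name extension of the apiserver cert
		openssl x509 -noout -ext subjectAltName \
		  -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt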
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
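The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so that the kubeadm init that follows writes a fresh one. A minimal bash sketch of the same logic (the endpoint value is taken from this run; the loop itself is illustrative, not minikube's actual implementation):

    # Remove any kubeconfig that does not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8444"
    for conf in admin kubelet controller-manager scheduler; do
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/${conf}.conf"; then
            sudo rm -f "/etc/kubernetes/${conf}.conf"
        fi
    done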
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
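The half-second cadence of the repeated "get sa default" runs above is minikube polling for the default ServiceAccount, which only exists once kubeadm's controllers have come up (the elevateKubeSystemPrivileges wait reported later in this log). As a sketch, the equivalent shell loop (binary path copied from this run; the 0.5s interval is inferred from the timestamps):

    # Poll until the default ServiceAccount appears in the new cluster.
    until sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done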
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
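Both health checks kubeadm reports above poll plain HTTP(S) endpoints and can be reproduced by hand from inside the node (e.g. via minikube ssh); a sketch, with the API server port taken from this profile's node config, and -k used because the server's certificate is not in the local trust store at this point:

    # Kubelet healthz (localhost:10248, plain HTTP).
    curl -sf http://127.0.0.1:10248/healthz
    # API server healthz on the secure port (8443 for this profile).
    curl -skf https://127.0.0.1:8443/healthz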
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
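Stripped of its sed quoting, the pipeline above edits CoreDNS's Corefile in place: it adds a log directive before the errors plugin and, ahead of the forward plugin, a hosts block so that host.minikube.internal resolves to the host gateway. The injected fragment (gateway address taken from this run) is just:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }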
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
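The burst of `kubectl get sa default` runs above (20:26:40.270 through 20:26:43.771) is the elevateKubeSystemPrivileges wait named in the duration metric that follows it: minikube retries roughly every 500ms until the default ServiceAccount exists, presumably as a proxy for the service-account controller having finished its initial sweep. A minimal shell equivalent, assuming the binary and kubeconfig paths shown in the log:

	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms cadence visible in the timestamps
	done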
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
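The "rescaled to 1 replicas" entries record minikube shrinking the coredns Deployment to a single replica on these single-node clusters. A CLI equivalent of what kapi.go does through the API (a sketch, not the literal mechanism) would be:

	kubectl -n kube-system scale deployment coredns --replicas=1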
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:44.456485  333931 node_ready.go:38] duration metric: took 4m0.003543817s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:30:44.458297  333931 out.go:201] 
	W0401 20:30:44.459571  333931 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:44.459594  333931 out.go:270] * 
	W0401 20:30:44.460727  333931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:44.461950  333931 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 20:26:44 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:26:44.859837647Z" level=info msg="Started container" PID=1914 containerID=901ead14674ca902c80ccfab27785fd598218cda7bce2cad3a9ca70939f51f28 description=kube-system/kube-proxy-btnmc/kube-proxy id=51f6e580-3957-46b1-ade1-a7b1b762c3e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=afd16935a506b580076a065f6ae5ca1ca4c03cc1456f34e7add69b3a9a203ab9
	Apr 01 20:27:17 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:17.374366478Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e374108b-1e39-47be-b661-a813993110c2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:17 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:17.374643405Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e374108b-1e39-47be-b661-a813993110c2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:29 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:29.238708859Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cf135e38-5f8b-4c97-892f-b6431bc0f521 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:29 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:29.239050581Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cf135e38-5f8b-4c97-892f-b6431bc0f521 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:27:29 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:29.239541734Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ce810aab-5250-44a2-a80f-96ef4161ad6d name=/runtime.v1.ImageService/PullImage
	Apr 01 20:27:29 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:27:29.240815298Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:28:12 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:12.238907423Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5ae31ef7-0191-4851-b92d-9fbec57263c7 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:12 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:12.239174388Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5ae31ef7-0191-4851-b92d-9fbec57263c7 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:26.238850327Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=60a131f1-40f3-4caf-8ee3-47ba8a8554aa name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:26.239179563Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=60a131f1-40f3-4caf-8ee3-47ba8a8554aa name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:28:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:26.239752282Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d4cc295b-49a0-4099-bc84-f1bbad538a26 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:28:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:28:26.240982560Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:29:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:11.239406466Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=eccb245c-0d38-42a3-b8fb-c530aaca3a86 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:11.239721672Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=eccb245c-0d38-42a3-b8fb-c530aaca3a86 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:24 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:24.239839985Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=aef53056-d21f-4a2c-aec0-c8b6f24bd32d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:24 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:24.240092253Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=aef53056-d21f-4a2c-aec0-c8b6f24bd32d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:37 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:37.239503463Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=25aa5633-603a-4685-af42-75fa7bc8f660 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:37 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:37.239771145Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=25aa5633-603a-4685-af42-75fa7bc8f660 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:50 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:50.238729780Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7c5c2cbe-c63d-49bf-8d65-b26aedbd9644 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:50 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:50.238973164Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7c5c2cbe-c63d-49bf-8d65-b26aedbd9644 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:29:50 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:50.239591387Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=dcdc9781-9576-44a3-ba4f-b5ff944f8bd8 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:29:50 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:29:50.240702734Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:30:33 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:30:33.238978517Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=78cde3f2-6a79-4725-b246-789015b24843 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:30:33 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:30:33.239312268Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=78cde3f2-6a79-4725-b246-789015b24843 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	901ead14674ca       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   4 minutes ago       Running             kube-proxy                0                   afd16935a506b       kube-proxy-btnmc
	0582ac1eac9e7       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   0                   50a8fff230f0e       kube-controller-manager-default-k8s-diff-port-993330
	38f17c6d6c18d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            0                   9bfb2a6c26975       kube-apiserver-default-k8s-diff-port-993330
	21b9dbd8d6257       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            0                   f74b59a5b87b8       kube-scheduler-default-k8s-diff-port-993330
	265bcef800f65       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      0                   d24837c573a23       etcd-default-k8s-diff-port-993330
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:30:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:26:39 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:26:39 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:26:39 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:26:39 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f9efd91622a43ff8c62538d2a5dee6c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m6s
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m1s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m12s (x8 over 4m12s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m6s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m6s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m6s                   kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m6s                   kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m6s                   kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m2s                   node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [265bcef800f65f87a982f41760a50d05b8b471734d0c9eb3c0aedfa4ea71219e] <==
	{"level":"info","ts":"2025-04-01T20:26:34.534887Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:34.534978Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:34.534856Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-01T20:26:34.535313Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:34.535354Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:35.074806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.075737Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076316Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:35.076337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076690Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076729Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.077119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:35.118248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:30:45 up  1:13,  0 users,  load average: 0.33, 2.72, 2.48
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [38f17c6d6c18db0d9f10a0d87db28e50ce8bb1d3e5d521a5fb71b3b079328b39] <==
	I0401 20:26:36.918604       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:26:36.919358       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 20:26:36.919648       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:36.919688       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:26:36.919698       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:26:36.919706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:36.919713       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:36.922458       1 controller.go:615] quota admission added evaluator for: namespaces
	E0401 20:26:36.926891       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:36.977970       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:37.801366       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:37.809683       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:37.809825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:38.352965       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:38.395811       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:38.529158       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:38.534999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0401 20:26:38.536167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:38.541299       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:38.843867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:39.334365       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:39.350211       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:39.357875       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:44.028415       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:44.426310       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0582ac1eac9e7fe6cc9ae5fe1a2fdbca64dc6f2415721e0e6f9cd8e075c2f7ac] <==
	I0401 20:26:43.384680       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:26:43.384694       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:26:43.392867       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0401 20:26:43.392908       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:43.392986       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:26:43.393143       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:26:43.393352       1 shared_informer.go:320] Caches are synced for crt configmap
	I0401 20:26:43.393515       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0401 20:26:43.393537       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:26:43.393631       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0401 20:26:43.393845       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:26:43.393921       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:43.394241       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:43.394351       1 shared_informer.go:320] Caches are synced for TTL
	I0401 20:26:43.395317       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:43.395549       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:43.397813       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.398860       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.414109       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:44.334167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	I0401 20:26:44.628314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="595.822096ms"
	I0401 20:26:44.640479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.099169ms"
	I0401 20:26:44.645556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="5.030169ms"
	I0401 20:26:44.656890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.205665ms"
	I0401 20:26:44.656986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.73µs"
	
	
	==> kube-proxy [901ead14674ca902c80ccfab27785fd598218cda7bce2cad3a9ca70939f51f28] <==
	I0401 20:26:44.894601       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:44.998268       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:26:44.998336       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:45.018925       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:45.019003       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:45.021196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:45.021635       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:45.021671       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:45.023446       1 config.go:329] "Starting node config controller"
	I0401 20:26:45.023539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:45.023422       1 config.go:199] "Starting service config controller"
	I0401 20:26:45.023632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:45.023440       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:45.023689       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:45.124011       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:45.124014       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:45.124012       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [21b9dbd8d62576a9c01ee56d38988a8024ae1ee6a6c4d006a881f902776b6225] <==
	W0401 20:26:37.744064       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:37.744119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.795743       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:37.795891       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:37.853181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:37.853456       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.899070       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:37.899125       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.935695       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:37.935832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.974076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:37.974251       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.986936       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:37.986983       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.999704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:37.999872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.064871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.064927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.073640       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:38.073691       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.139325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.139494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.164184       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:38.164333       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:39.623914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:29:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:29:59.294762    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539399294539714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:29:59.294803    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539399294539714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:29:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:29:59.361986    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:04 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:04.363449    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:09.295835    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539409295620083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:09.295878    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539409295620083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:09.364095    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:14 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:14.365782    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:19.297147    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539419296956407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:19.297192    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539419296956407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:19.367040    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:21 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:21.995883    1652 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:21 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:21.995949    1652 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:30:21 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:21.996103    1652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-9xbmt_kube-system(68b2c7ae-356c-49af-994e-ada27ca91c66): ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:30:21 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:21.997288    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:30:24 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:24.368796    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:29.298245    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539429297998802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:29.298292    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539429297998802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:29.370495    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:33 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:33.239564    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:30:34 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:34.371994    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:39.299365    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539439299140811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:39.299410    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539439299140811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:30:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:39.373518    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:30:44 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:30:44.374250    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1 (60.416243ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (267.94s)
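The failures in this group share one root cause, visible in the kubelet and CRI-O logs above: the kindnet CNI image docker.io/kindest/kindnetd:v20250214-acbabc1a is never pulled because Docker Hub answers with toomanyrequests (the unauthenticated pull rate limit). With no CNI configuration written to /etc/cni/net.d/, the kubelet keeps the node NotReady and the node-Ready wait (wait 6m0s for node) times out. A sketch of one possible mitigation for a rerun, assuming Docker Hub credentials are available on the CI host (illustrative commands, not part of the recorded run):

	docker login                                                  # authenticated pulls are not subject to the anonymous rate limit
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a     # fetch the CNI image once on the host
	minikube image load docker.io/kindest/kindnetd:v20250214-acbabc1a -p default-k8s-diff-port-993330
	                                                              # side-load the cached image into the profile so kindnet can start without reaching Docker Hub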

TestStartStop/group/no-preload/serial/DeployApp (484.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-671514 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [88e6ef5b-5ac8-4fef-9eef-05b92dd3f5f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0401 20:30:42.645634   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:42.842168   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/DeployApp: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:194: ***** TestStartStop/group/no-preload/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
start_stop_delete_test.go:194: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
start_stop_delete_test.go:194: TestStartStop/group/no-preload/serial/DeployApp: showing logs for failed pods as of 2025-04-01 20:38:40.788646451 +0000 UTC m=+3206.389577885
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-671514 describe po busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context no-preload-671514 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-hxxvc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  2m36s (x2 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-671514 logs busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context no-preload-671514 logs busybox -n default:
start_stop_delete_test.go:194: wait: integration-test=busybox within 8m0s: context deadline exceeded
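The describe output above shows the pod spec itself is unremarkable; busybox stayed Pending for the full 8m0s because the single node never shed its node.kubernetes.io/not-ready taint, so the scheduler had nowhere to place it. A minimal way to confirm that from the same kubeconfig context, assuming the node carries the profile's default name no-preload-671514 (a diagnosis sketch, not part of the test harness):

	# Is the node Ready? A NotReady node is what produces the untolerated taint.
	kubectl --context no-preload-671514 get nodes -o wide

	# Show the node's taints directly (node name assumed).
	kubectl --context no-preload-671514 describe node no-preload-671514 | grep -A2 -i taints

	# kube-system pods (CNI, kube-proxy, runtime) are the usual reason a node stays NotReady.
	kubectl --context no-preload-671514 get pods -n kube-system -o wide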
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:
-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320994,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:53.725412829Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551a0a4bf7c626f1683950daf2267c02a0c1a380ba131a8e8d82e662c41d9ec3",
	            "SandboxKey": "/var/run/docker/netns/551a0a4bf7c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a6:70:db:fd:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "1a7e5caa72d88eb8737c228beb2c5614aedde15b52d06379ca4b1c60e6b9f6aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
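The inspect output shows the kicbase container itself was healthy while the test failed: State.Running is true, OOMKilled is false, and the API server's 8443/tcp is published at 127.0.0.1:33096. For reference, the same fields can be read back with Go templates of the kind the harness already uses elsewhere in this log (a sketch; container name taken from the inspect above):

	# Container state at a glance.
	docker inspect no-preload-671514 --format '{{.State.Status}} oom-killed={{.State.OOMKilled}}'

	# Host port backing the API server, matching NetworkSettings.Ports above.
	docker inspect no-preload-671514 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'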
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
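Each binary above is fetched from dl.k8s.io with a checksum=file: query, i.e. the downloader retrieves the matching .sha256 sidecar and verifies the payload against it before the scp into /var/lib/minikube/binaries. A rough equivalent of that verify-after-download step in plain Go (URL as in the log; error handling trimmed to panics):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download fetches url into path and returns the SHA-256 of the bytes written.
func download(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet"
	got, err := download(base, "kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256") // sidecar holds the expected hex digest
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(raw))[0]
	if got != want {
		fmt.Println("checksum mismatch:", got, "!=", want)
		os.Exit(1)
	}
	fmt.Println("checksum OK for", base)
}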
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
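The one-liner above makes the /etc/hosts entry idempotent: any existing line ending in the tab-separated hostname is filtered out, the fresh mapping is appended, and the result is copied back via sudo. The same filter-and-append logic as a small Go sketch (a simplification of the bash pipeline; writing the result back would still need root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHost returns hosts content with exactly one line mapping name to ip.
func ensureHost(content, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(content, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Print the rewritten file; copying it over /etc/hosts requires root.
	fmt.Print(ensureHost(string(data), "192.168.76.2", "control-plane.minikube.internal"))
}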
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
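All three profile certs above (client, apiserver, aggregator proxy-client) follow the same pattern: generate a fresh key, then sign a certificate with the shared minikube CA, embedding the required addresses as SANs, exactly the [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] set logged for the apiserver cert. A self-contained sketch with crypto/x509 (the CA is generated inline here for runnability; minikube reuses its cached ca.key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Inline self-signed CA standing in for the cached minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the apiserver SAN IPs from the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}) // errors elided for brevity
}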
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
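Each PEM installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 is the hash of minikubeCA.pem), which is the value 'openssl x509 -hash -noout' prints and what TLS libraries use to look up a CA by subject. A sketch of the hash-and-link step, shelling out to openssl (needs root for the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash, e.g. b5213941, on its own line.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}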
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
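The bootstrap itself is a single exec of kubeadm from the versioned binaries directory, with PATH overridden and a long preflight ignore list (the docker driver cannot satisfy checks such as Swap or SystemVerification). A stripped-down Go equivalent of that invocation; the ignore list is abbreviated here, the full one is in the Start line above:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirrors the Start line above: run the versioned kubeadm against
	// the staged config, ignoring preflight checks the docker driver fails.
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.32.2:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,SystemVerification")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}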
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
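The subnet probing above walks the private 192.168.x.0/24 ladder in steps of 9 (49, 58, 67, 76, 85, 94, ...) and takes the first candidate with no bridge interface already inside it. A toy version of that scan follows; minikube's real network.go additionally tracks reservations and gateway/broadcast bookkeeping:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address already sits inside cidr.
func taken(cidr string) bool {
	_, subnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Same ladder as the log: start at 192.168.49.0/24, step the third octet by 9.
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		return
	}
}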
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
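Extraction runs entirely inside a throwaway kicbase container: the lz4 tarball is bind-mounted read-only as /preloaded.tar, the named volume as /extractDir, and tar is the entrypoint. A Go sketch of composing that docker run (the cache path is shortened and the image tag unpinned for readability; the log line above shows the real values):

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4"
	volume := "default-k8s-diff-port-993330"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523"
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}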
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
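All of the CRI-O reconfiguration above is done by rewriting /etc/crio/crio.conf.d/02-crio.conf in place with sed: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and lower ip_unprivileged_port_start to 0 via default_sysctls, then restart crio and wait for its socket. The pause-image substitution, for instance, translates to a one-regexp rewrite in Go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same effect as the sed one-liner above: replace any existing
	// pause_image line wholesale, keeping the rest of the file intact.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("pause image pinned in", conf)
}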
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
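Whether the preload can be skipped is decided by listing images through crictl's JSON output and checking that the expected tags are present, as the two "all images are preloaded" lines above show. A sketch of that check (the expected list here is illustrative; the real one comes from the preload manifest):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors the slice of `crictl images --output json` we care about.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative expectations; the real list comes from the preload manifest.
	for _, want := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/coredns/coredns:v1.11.3"} {
		fmt.Println(want, "preloaded:", have[want])
	}
}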
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
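
The profile certs above are signed by the shared minikubeCA, and the apiserver cert carries the IP SANs listed at crypto.go:68 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.94.2). A minimal sketch of that generate-and-sign pattern using Go's crypto/x509 (illustrative only, not minikube's actual crypto.go; error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs seen in the log above.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
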
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
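
The --discovery-token-ca-cert-hash value printed with both join commands is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA during token-based discovery. A sketch that computes the same value from a CA PEM (the file path is illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Illustrative path; inside the node it would be /var/lib/minikube/certs/ca.crt.
        pemBytes, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
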
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
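
The openssl x509 -hash -noout calls above print the OpenSSL subject-name hash for each PEM, and the following ln -fs commands link each cert into /etc/ssl/certs under <hash>.0 so the system trust store can resolve it (b5213941.0 is the hash link for minikubeCA.pem). A sketch of that hash-and-link step, shelling out to openssl as the log does (paths are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of `ln -fs`: drop any existing link, then relink.
        os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
    }
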
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
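
The four grep-then-rm exchanges above are a stale-config cleanup pass: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443, and anything else is removed so kubeadm init can write fresh files. The same check expressed in Go (a sketch, not minikube's actual kubeadm.go):

    package main

    import (
        "bytes"
        "os"
        "path/filepath"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            // Missing file or wrong endpoint: remove so `kubeadm init` regenerates it.
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(path)
            }
        }
    }
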
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
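
The [kubelet-check] phase above is a plain polling loop against the kubelet's local healthz endpoint (http://127.0.0.1:10248/healthz) with a 4m0s budget; the [api-check] that follows works the same way against the API server. A minimal version of such a wait loop (interval and messages are illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Matches the "up to 4m0s" budget quoted in the log.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := http.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for kubelet")
    }
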
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
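
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this fragment is reconstructed from the substitutions in the log, not read from the file itself:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
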
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
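Note: the generated file above is a multi-document kubeadm config, four documents separated by "---": an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration. The 0% eviction thresholds and imageGCHighThresholdPercent: 100 deliberately disable disk-pressure eviction inside the throwaway node. Newer kubeadm releases can sanity-check such a file before init; assuming the v1.32.2 binary staged in this run supports the "kubeadm config validate" subcommand, the check would look like (sketch):

	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml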
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
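Note: the failed stat is the expected outcome here. minikube probes for apiserver-kubelet-client.crt and treats its absence as the signal that this is a first start, so it proceeds to a full "kubeadm init" rather than trying to reuse an existing control plane. The equivalent probe by hand, assuming the profile name from this run (sketch):

	minikube -p default-k8s-diff-port-993330 ssh -- stat /var/lib/minikube/certs/apiserver-kubelet-client.crt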
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
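Note: the four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444. Since none of the files exist yet on this fresh node, every grep exits with status 2 and the rm calls are no-ops. Condensed, the loop amounts to (sketch):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done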
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
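Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), not of the certificate itself. The same hash (sha256:3d93fb35...) recurs for the embed-certs and default-k8s-diff-port clusters later in this log because all profiles reuse the cached minikubeCA whose generation was skipped as "valid" earlier. With the CA at the certificatesDir used here (/var/lib/minikube/certs/ca.crt; stock kubeadm uses /etc/kubernetes/pki/ca.crt), the hash can be reproduced on the node with the standard openssl pipeline:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'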
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
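Note: the half-second "kubectl get sa default" polls above are how minikube waits out elevateKubeSystemPrivileges: after creating the minikube-rbac clusterrolebinding it polls for the default ServiceAccount, whose appearance signals that the controller-manager's serviceaccount controller is up. Process 320217 succeeds after 4.2s; process 318306 (using v1.20.0 binaries, apparently the old-k8s-version profile) is still polling the same way throughout this window. The poll reduces to (sketch):

	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do
	  sleep 0.5
	done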
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
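Note: the sed pipeline a few lines above rewrites the CoreDNS ConfigMap in place, inserting a hosts stanza (mapping host.minikube.internal to the gateway IP, 192.168.76.1 for this profile) before the forward directive and a log directive before errors, then feeds the result to kubectl replace; the "host record injected" line confirms it landed. To see the patched Corefile, assuming the kubeconfig context minikube creates for the profile (sketch):

	kubectl --context no-preload-671514 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'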
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
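As a hedged aside on the join commands printed above: the --discovery-token-ca-cert-hash value is a sha256 over the cluster CA's public key, and can be recomputed on the control plane with stock openssl (a minimal sketch, assuming kubeadm's default CA path /etc/kubernetes/pki/ca.crt):

	# Recompute the hash kubeadm embeds in its join commands.
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'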
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
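After the cni.yaml apply above, the rollout can be spot-checked with the same binary and kubeconfig the log uses (a sketch; the app=kindnet pod label is an assumption about minikube's kindnet manifest, not something the log confirms):

	# List the kindnet pods the manifest should have created in kube-system.
	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide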
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
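The burst of `kubectl get sa default` runs above (and the matching v1.20.0 runs interleaved alongside) is minikube's elevateKubeSystemPrivileges wait: poll for the default ServiceAccount, then grant kube-system cluster-admin. A minimal shell equivalent, reusing only commands already visible in the log:

	# Wait for the "default" ServiceAccount, then bind cluster-admin to kube-system.
	until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the log's polls land roughly every 500ms
	done
	sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	  --kubeconfig=/var/lib/minikube/kubeconfig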
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
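The "rescaled to 1 replicas" line above is minikube trimming coredns for a single-node cluster; the imperative equivalent (a sketch against the kubectl context named in the log) would be:

	# Single-node profiles only need one coredns replica.
	kubectl --context no-preload-671514 -n kube-system scale deployment coredns --replicas=1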
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
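The sed pipeline a few lines above is what produces this "host record injected" message: it splices a hosts block (plus a `log` directive) into the coredns Corefile before replacing the ConfigMap. Reconstructed from the sed expressions, the injected fragment is:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}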
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
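With both addons reported enabled, the expected end state can be spot-checked from the host (a sketch; the `standard` default StorageClass and the kube-system pod named storage-provisioner are assumptions about minikube's stock addon manifests):

	# "standard" should be annotated as the default StorageClass.
	kubectl --context embed-certs-974821 get storageclass
	# The provisioner itself runs as a pod in kube-system.
	kubectl --context embed-certs-974821 -n kube-system get pod storage-provisioner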
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
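Every node_ready.go:53 line that follows is one poll of a node's Ready condition inside the 6m budget declared above. The same wait can be expressed in a single kubectl invocation (a sketch using the profile/context names from the log):

	# Block until the node is Ready, or exit non-zero after 6 minutes.
	kubectl --context old-k8s-version-964633 wait --for=condition=Ready \
	  node/old-k8s-version-964633 --timeout=6m0s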
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
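The label --overwrite call above stamps minikube's bookkeeping labels onto the node; they can be read back as extra columns with kubectl's -L flag (a sketch, same binary and kubeconfig as the log):

	# Print the minikube.k8s.io/* labels alongside the node.
	sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node default-k8s-diff-port-993330 -L minikube.k8s.io/version -L minikube.k8s.io/primary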
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:44.456485  333931 node_ready.go:38] duration metric: took 4m0.003543817s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:30:44.458297  333931 out.go:201] 
	W0401 20:30:44.459571  333931 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:44.459594  333931 out.go:270] * 
	W0401 20:30:44.460727  333931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:44.461950  333931 out.go:201] 
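	All four profiles fail identically: the node never reports Ready inside the 4m0s node-ready window, so each start exits with GUEST_START after exhausting its 6m budget (no-preload-671514, embed-certs-974821, old-k8s-version-964633, default-k8s-diff-port-993330). A minimal triage sketch, assuming minikube's default behavior of naming the kubectl context after the profile; the profile below is just the first of the four:
	
	  # Expect STATUS=NotReady for the single control-plane node.
	  kubectl --context no-preload-671514 get nodes -o wide
	
	  # Show the node conditions and the Ready reason (compare the describe-nodes dump below).
	  kubectl --context no-preload-671514 describe node no-preload-671514 | grep -A 6 'Conditions:'
	
	  # Capture full logs for a bug report, as the error box above suggests.
	  out/minikube-linux-amd64 -p no-preload-671514 logs --file=logs.txt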
	
	
	==> CRI-O <==
	Apr 01 20:36:02 no-preload-671514 crio[1038]: time="2025-04-01 20:36:02.242495565Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e8b770d6-2421-4008-b988-14e3e4781c21 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:13 no-preload-671514 crio[1038]: time="2025-04-01 20:36:13.242153692Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=84a1276a-0b56-4975-a2c1-0f325f1a675d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:13 no-preload-671514 crio[1038]: time="2025-04-01 20:36:13.242441085Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=84a1276a-0b56-4975-a2c1-0f325f1a675d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:26 no-preload-671514 crio[1038]: time="2025-04-01 20:36:26.241947682Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=082a9212-b25f-4c20-9d09-5883ac478690 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:26 no-preload-671514 crio[1038]: time="2025-04-01 20:36:26.242272003Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=082a9212-b25f-4c20-9d09-5883ac478690 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:39 no-preload-671514 crio[1038]: time="2025-04-01 20:36:39.241614390Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e7778946-f108-40e9-8142-01e9107843e3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:39 no-preload-671514 crio[1038]: time="2025-04-01 20:36:39.241900500Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e7778946-f108-40e9-8142-01e9107843e3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:53 no-preload-671514 crio[1038]: time="2025-04-01 20:36:53.242293373Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=74b6cd94-18ab-48fe-9335-554e8a760b0a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:53 no-preload-671514 crio[1038]: time="2025-04-01 20:36:53.242608561Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=74b6cd94-18ab-48fe-9335-554e8a760b0a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:06 no-preload-671514 crio[1038]: time="2025-04-01 20:37:06.242473038Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e721ff08-1674-413d-b71e-ff837de3f2ef name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:06 no-preload-671514 crio[1038]: time="2025-04-01 20:37:06.242748246Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e721ff08-1674-413d-b71e-ff837de3f2ef name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:20 no-preload-671514 crio[1038]: time="2025-04-01 20:37:20.242721823Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=80b4384a-daae-4a9b-8ac2-0e8033b39ad1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:20 no-preload-671514 crio[1038]: time="2025-04-01 20:37:20.242994444Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=80b4384a-daae-4a9b-8ac2-0e8033b39ad1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:31 no-preload-671514 crio[1038]: time="2025-04-01 20:37:31.242552447Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d18b179b-3d8f-49f7-b59c-baabec63da7c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:31 no-preload-671514 crio[1038]: time="2025-04-01 20:37:31.242784985Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d18b179b-3d8f-49f7-b59c-baabec63da7c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:43 no-preload-671514 crio[1038]: time="2025-04-01 20:37:43.242306242Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1e87f864-bfea-4530-a754-90d72f51c63d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:43 no-preload-671514 crio[1038]: time="2025-04-01 20:37:43.242525778Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1e87f864-bfea-4530-a754-90d72f51c63d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:56 no-preload-671514 crio[1038]: time="2025-04-01 20:37:56.242718814Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1204c605-9387-46e8-b543-f8f785b97f4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:56 no-preload-671514 crio[1038]: time="2025-04-01 20:37:56.242984356Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1204c605-9387-46e8-b543-f8f785b97f4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:08 no-preload-671514 crio[1038]: time="2025-04-01 20:38:08.242259853Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=b5415e90-6cf8-49ae-affe-6d0495956391 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:08 no-preload-671514 crio[1038]: time="2025-04-01 20:38:08.242560587Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=b5415e90-6cf8-49ae-affe-6d0495956391 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:22 no-preload-671514 crio[1038]: time="2025-04-01 20:38:22.241820312Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e30e4a77-8a18-4e8f-b41a-0e45389aa9fd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:22 no-preload-671514 crio[1038]: time="2025-04-01 20:38:22.242109762Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e30e4a77-8a18-4e8f-b41a-0e45389aa9fd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:35 no-preload-671514 crio[1038]: time="2025-04-01 20:38:35.241945268Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=35ba7315-0171-41cd-b9cd-989371342bf5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:35 no-preload-671514 crio[1038]: time="2025-04-01 20:38:35.242170884Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=35ba7315-0171-41cd-b9cd-989371342bf5 name=/runtime.v1.ImageService/ImageStatus
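	The CRI-O excerpt is one loop repeated every ten to fifteen seconds: an ImageStatus check for docker.io/kindest/kindnetd:v20250214-acbabc1a followed by "not found", meaning the kindnet image was never pulled onto the node. A quick verification sketch, assuming crictl is present in the minikube node image as usual:
	
	  # List images cached on the node; kindnetd should be absent in this failure mode.
	  out/minikube-linux-amd64 -p no-preload-671514 ssh -- sudo crictl images | grep kindnetd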
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85c1e320d180b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   8ef8085608dab       kube-proxy-pfvch
	b0aca46f57421       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   d6eb0bc2d9faa       kube-controller-manager-no-preload-671514
	b1305e045e585       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   7f48b88c185a1       kube-apiserver-no-preload-671514
	b23ca2b60aaee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   2269c2f962a90       kube-scheduler-no-preload-671514
	a09569ee98d25       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   313adeb65123a       etcd-no-preload-671514
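	Note what the container list lacks: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and kube-proxy are all Running, but there is no kindnet container, even though the describe-nodes output below lists a kindnet-5tgtq pod. A sketch for inspecting that pod directly, with the context name again assumed to match the profile:
	
	  # The pod events should report ImagePullBackOff / ErrImagePull for kindnet-cni.
	  kubectl --context no-preload-671514 -n kube-system describe pod kindnet-5tgtq | tail -n 20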
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3cd2d371a346a59dfa1024d7cfa972
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
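	The Ready condition spells out the dependency chain: kubelet stays NotReady because NetworkReady=false, and NetworkReady is false because nothing ever wrote a CNI config to /etc/cni/net.d/; kindnet, the CNI provider in this configuration, is exactly the container that never started. A sketch for confirming the empty CNI directory, assuming the standard minikube ssh entry point:
	
	  # An empty (or missing) directory here confirms the CNI plugin never initialized.
	  out/minikube-linux-amd64 -p no-preload-671514 ssh -- ls -la /etc/cni/net.d/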
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [a09569ee98d25b8797a01583cf6bb9cf3fe3b924561e718c16c33790406ba75f] <==
	{"level":"info","ts":"2025-04-01T20:26:27.060933Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:27.060798Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:27.147043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.148230Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.148768Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:27.148843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.149010Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149153Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.150349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.151183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:26:27.151297Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.152062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-04-01T20:36:27.850510Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":505}
	{"level":"info","ts":"2025-04-01T20:36:27.855271Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":505,"took":"4.473198ms","hash":2229897876,"current-db-size-bytes":1290240,"current-db-size":"1.3 MB","current-db-size-in-use-bytes":1290240,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-04-01T20:36:27.855319Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2229897876,"revision":505,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:42 up  1:21,  0 users,  load average: 0.79, 0.91, 1.63
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [b1305e045e585214e298aab4fd349ff7d954cc6f0d1e21c68ba6f8661dca4d35] <==
	I0401 20:26:29.756705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:29.756712       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:29.819873       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0401 20:26:29.822664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:29.822700       1 policy_source.go:240] refreshing policies
	I0401 20:26:29.845121       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:29.846052       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:29.846334       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:26:29.846348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:26:29.918153       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:30.638898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:30.642611       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:30.642630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:31.117588       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:31.154903       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:31.247406       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:31.253764       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0401 20:26:31.255167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:31.259965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:31.747957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:32.159479       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:32.172748       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:32.181425       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:37.047528       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:26:37.096719       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b0aca46f57421e96e35baa84bcdcd9a6bad97eecb63ba229e036b31284013db3] <==
	I0401 20:26:36.200100       1 shared_informer.go:320] Caches are synced for node
	I0401 20:26:36.200176       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:26:36.200249       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:26:36.200261       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:26:36.200269       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:26:36.206895       1 shared_informer.go:320] Caches are synced for namespace
	I0401 20:26:36.208451       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-671514" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:36.208482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.208520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.209406       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:36.261522       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0401 20:26:36.292706       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:36.292756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:26:36.292766       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:26:36.367026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:37.266267       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:37.450979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="351.026065ms"
	I0401 20:26:37.543105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.050087ms"
	I0401 20:26:37.543243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.483µs"
	I0401 20:26:38.246138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="26.287677ms"
	I0401 20:26:38.269291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.910701ms"
	I0401 20:26:38.271288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.904763ms"
	I0401 20:26:38.271582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="184.754µs"
	I0401 20:29:56.854082       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:35:03.677585       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	
	
	==> kube-proxy [85c1e320d180bbd0088975d6a178f8be6cd9d4bc212333659d16d82afc49e614] <==
	I0401 20:26:37.949549       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:38.161117       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:26:38.161200       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:38.192676       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:38.192754       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:38.226172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:38.226996       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:38.227319       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:38.229729       1 config.go:199] "Starting service config controller"
	I0401 20:26:38.229801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:38.229841       1 config.go:329] "Starting node config controller"
	I0401 20:26:38.237960       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:38.230235       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:38.238081       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:38.333244       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:38.343398       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:38.346335       1 shared_informer.go:320] Caches are synced for endpoint slice config
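	kube-proxy starts cleanly and all three of its caches sync, which, together with the unremarkable etcd, apiserver, and controller-manager logs above, isolates the failure to the CNI layer rather than the control plane. A one-command sketch to see the split, with the usual caveat that the context name is assumed from the profile:
	
	  # Expect control-plane pods Running, kindnet-5tgtq in ImagePullBackOff,
	  # and coredns typically stuck Pending because there is no pod network yet.
	  kubectl --context no-preload-671514 -n kube-system get pods -o wide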
	
	
	==> kube-scheduler [b23ca2b60aaee9f0d3c9d088f7ba444675fd1621dfc819621355bfa1d77ccdfb] <==
	W0401 20:26:29.834918       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:29.834950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:29.835049       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835293       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:29.835324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835415       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835478       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835574       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:29.835598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.838254       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:29.838318       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.680771       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:30.680814       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.817477       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:30.817608       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.834173       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:30.834218       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.911974       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:30.912043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.940767       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:30.940821       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0401 20:26:32.556366       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:37:47 no-preload-671514 kubelet[2620]: E0401 20:37:47.321222    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:37:52 no-preload-671514 kubelet[2620]: E0401 20:37:52.246519    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539872246337862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:37:52 no-preload-671514 kubelet[2620]: E0401 20:37:52.246568    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539872246337862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:37:52 no-preload-671514 kubelet[2620]: E0401 20:37:52.322748    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:37:56 no-preload-671514 kubelet[2620]: E0401 20:37:56.243237    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:37:57 no-preload-671514 kubelet[2620]: E0401 20:37:57.324284    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.247459    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539882247258229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.247500    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539882247258229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.325390    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:07 no-preload-671514 kubelet[2620]: E0401 20:38:07.326429    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:08 no-preload-671514 kubelet[2620]: E0401 20:38:08.242827    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.248433    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539892248265721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.248474    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539892248265721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.327272    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:17 no-preload-671514 kubelet[2620]: E0401 20:38:17.328455    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.242377    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.249358    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539902249199486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.249398    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539902249199486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.329988    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:27 no-preload-671514 kubelet[2620]: E0401 20:38:27.330725    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.250911    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539912250729302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.250941    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539912250729302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.332062    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:35 no-preload-671514 kubelet[2620]: E0401 20:38:35.242471    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:37 no-preload-671514 kubelet[2620]: E0401 20:38:37.333216    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
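Two failure signatures dominate the log dump above. The kube-scheduler "forbidden" reflector warnings are ordinary control-plane bootstrap noise that clears once RBAC is wired up, as the closing "Caches are synced" line shows. The real fault is kindnet-cni stuck in ImagePullBackOff: docker.io returns toomanyrequests (the unauthenticated Docker Hub pull rate limit), so the CNI pod never starts, no config is written to /etc/cni/net.d/, the node stays NotReady, and workloads cannot schedule. A minimal mitigation sketch, assuming the host (via a mirror or an authenticated docker login) can still pull the image, is to side-load it so the kubelet never hits docker.io:

	# Pull once on the host, then inject the image into the minikube node.
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a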
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1 (67.958491ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hxxvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m38s (x2 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1
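The describe output accounts for both symptoms. busybox stays Pending because the single node still carries the node.kubernetes.io/not-ready taint, which is expected while the CNI is down; the three NotFound errors appear because describe ran against the default namespace, while coredns, kindnet, and storage-provisioner live in kube-system. A quick check of the taint, assuming the same kubectl context, might look like:

	kubectl --context no-preload-671514 get nodes \
	  -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

The remaining pods can be described by adding an explicit -n kube-system.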
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:

-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320994,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:53.725412829Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "551a0a4bf7c626f1683950daf2267c02a0c1a380ba131a8e8d82e662c41d9ec3",
	            "SandboxKey": "/var/run/docker/netns/551a0a4bf7c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:a6:70:db:fd:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "1a7e5caa72d88eb8737c228beb2c5614aedde15b52d06379ca4b1c60e6b9f6aa",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
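The inspect dump shows the node container itself is healthy: State.Status is "running", the API server port 8443 is published at 127.0.0.1:33096, and the node holds 192.168.76.2 on the no-preload-671514 network. Instead of reading the full JSON, the same fields can be extracted with Go templates, the technique minikube's own start log uses below; a small sketch:

	# Container state plus cluster IP on the profile network.
	docker inspect no-preload-671514 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "no-preload-671514").IPAddress}}'
	# Host port mapped to the API server (8443/tcp).
	docker container inspect no-preload-671514 -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'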
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                              |         |         |                     |                     |
	| ssh     | -p bridge-460236 sudo crio                           | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                        |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                     | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                 |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                        | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                          |                              |         |         |                     |                     |
	|         | --all --full --no-pager                              |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                             |                              |         |         |                     |                     |
	|         | --no-pager                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                          |                              |         |         |                     |                     |
	|         | --full --no-pager                                    |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                               | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                        |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                               |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                    | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                   | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                         |                              |         |         |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                         |                              |         |         |                     |                     |
	|         | --memory=2200                                        |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |         |                     |                     |
	|         | --driver=docker                                      |                              |         |         |                     |                     |
	|         | --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                         |                              |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
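The kubelet unit text above uses the standard systemd drop-in idiom: the empty ExecStart= line clears the ExecStart inherited from the base unit so that the following ExecStart= can redefine it. A minimal sketch for inspecting the merged unit on such a node (commands are standard systemd tooling shown for illustration, not taken from this log):

	# Show the base kubelet unit plus all drop-ins, in merge order.
	sudo systemctl cat kubelet
	# Re-read unit files after editing a drop-in, then restart the service.
	sudo systemctl daemon-reload && sudo systemctl restart kubelet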
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
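A kubeadm config like the one rendered above can be sanity-checked before use without mutating any node state; a sketch, assuming kubeadm v1.32.x is on PATH and the file was written to the path used in this run:

	# Render what kubeadm would do, without creating any cluster state.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run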
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
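The checksum=file:...sha256 suffixes in the download URLs above mean each binary is verified against its published SHA-256 digest before being cached. The same verification can be done by hand; a sketch using the release URLs from the log (the sha256sum invocation is illustrative):

	curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet
	curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	# The .sha256 file contains only the digest, so append the filename.
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check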
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
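The one-liner above is minikube's idempotent /etc/hosts update: strip any existing entry for the name, append the fresh mapping to a temp file, then copy the temp file over /etc/hosts in a single step. A generalized sketch of the same pattern (the hostname and IP here are hypothetical placeholders):

	{ grep -v $'\tmyhost.internal$' /etc/hosts; echo $'192.0.2.10\tmyhost.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts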
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
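The apiserver certificate generated above embeds the service VIP, loopback, and node IP as SANs. Whether a given cert actually carries those SANs can be checked with standard openssl tooling; a sketch against the profile path used in this run (the openssl usage is illustrative, not from the log):

	openssl x509 -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'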
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
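The hex names created in /etc/ssl/certs above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash lookup names: openssl x509 -hash prints the hash OpenSSL expects a CA symlink to carry so the cert can be found during chain verification. A sketch of recreating such a link by hand for one of the certs from this run:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"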
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
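Once the CRI socket is up, the runtime can also be queried directly over CRI; a sketch using crictl against the endpoint written to /etc/crictl.yaml earlier (these are standard crictl flags shown for illustration, not taken from this log):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl ps -a    # list all containers the runtime knows about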
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
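	Note: the [certs] phase writes its output under the certificateDir "/var/lib/minikube/certs" (shown explicitly for the other profiles in this log; assumed identical here). To confirm which DNS names and IPs a serving cert actually covers, e.g. the etcd/server cert signed for [localhost no-preload-671514] above:

	# Print the SANs of the etcd serving cert (certificateDir assumed from the log).
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/etcd/server.crt \
	    | grep -A1 'Subject Alternative Name'
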
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
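	Note: both crictl runs above return the full image list as JSON, which is how minikube decides the preload already covers every required image. A quick spot-check from inside the node, assuming jq is available there (not guaranteed in the kicbase image):

	# Count the images CRI-O reports after the preload.
	sudo crictl images --output json | jq '.images | length'
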
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
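	Note: the generated unit above uses the standard systemd override pattern: the bare "ExecStart=" line clears the ExecStart inherited from the base kubelet unit before the minikube-specific command line is set. The merged result, including the 10-kubeadm.conf drop-in copied below, can be inspected with:

	# Show the base kubelet unit plus all drop-ins, in merge order.
	systemctl cat kubelet
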
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
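	Note: the rendered config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of how to lint such a file before init, assuming the `kubeadm config validate` subcommand present in recent releases:

	# Validate the generated kubeadm config without touching the cluster.
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
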
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
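	Note: the apiserver profile cert just assembled is signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]: the kubernetes service VIP (the first IP of the ServiceCIDR 10.96.0.0/12 from the config above), the conventional 10.0.0.1, localhost, and the node IP. Its SANs can be read back on the host with:

	# Print the SANs of the freshly written apiserver cert (profile path from the log).
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
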
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
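	Note: each `openssl x509 -hash` call above computes the subject-hash filename that OpenSSL's certificate directory lookup expects, and the following ln -fs publishes the PEM under that name (e.g. b5213941.0 for minikubeCA.pem). The same link can be reproduced by hand:

	# Recreate the subject-hash symlink for the minikube CA (names taken from the log).
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
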
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
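	Note: the four grep-then-rm pairs above all apply the same rule: keep an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it. Condensed into one loop (a sketch, not minikube's actual code path):

	# Drop kubeconfigs that don't reference the expected control-plane endpoint.
	for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
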
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
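	Note: the docker run above publishes the container's SSH and API ports to ephemeral host ports on 127.0.0.1 (--publish=127.0.0.1::22, --publish=127.0.0.1::8444); the inspect template a few lines below recovers the SSH mapping (33103 here). An equivalent one-liner:

	# Ask Docker which host port was bound to the container's SSH port.
	docker port default-k8s-diff-port-993330 22/tcp
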
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
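	Note: the printf | tee sequence above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube, which the kicbase crio unit is expected to read as an environment file before the restart (the EnvironmentFile wiring itself is not shown in this log). To confirm:

	# Show the written env file and the unit (plus drop-ins) that should consume it.
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio
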
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
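(Each sed call above rewrites one line of /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and restart. A minimal sketch of the same per-line replacement in Go, for illustration only; the (?m) flag makes ^ and $ match line boundaries the way sed does.)

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}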
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
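(The crictl version probe above returns the small key/value block shown: RuntimeName, RuntimeVersion, RuntimeApiVersion. A standalone sketch that runs the same probe and picks out RuntimeVersion; it assumes crictl lives at /usr/bin/crictl as in this log.)

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if v, ok := strings.CutPrefix(line, "RuntimeVersion:"); ok {
			fmt.Println("runtime version:", strings.TrimSpace(v)) // e.g. "1.24.6"
			return
		}
	}
}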
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
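(The /etc/hosts pinning above is a filter-and-append: drop any stale host.minikube.internal line, then add the gateway IP. The same logic as a standalone sketch, without the temp-file-plus-sudo-cp step the log uses.)

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.103.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror grep -v $'\thost.minikube.internal$': drop any previous pin.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}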
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
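(The generated kubeadm config above is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits the stream on document separators and reports each document's kind, assuming gopkg.in/yaml.v3; minikube itself templates this text directly rather than parsing it this way.)

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var head struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &head); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", head.Kind, head.APIVersion)
	}
}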
	
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
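(The certs.go/crypto.go steps above sign per-profile certificates with the shared minikubeCA key. A self-contained sketch of that signing flow using the standard crypto/x509 package; it generates a throwaway CA instead of loading the existing ca.key, so it only illustrates the pattern, not minikube's exact certificate fields.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	// Stand-in CA; minikube would load the existing minikubeCA key pair instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Client certificate analogous to the "minikube-user" profile cert above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}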
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
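(The openssl x509 -hash / ln -fs pairs above build the hashed symlinks OpenSSL expects in /etc/ssl/certs, i.e. <subject-hash>.0 pointing at the certificate. A sketch of the same two steps driven from Go via os/exec, for illustration; the log runs them through ssh_runner with sudo.)

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mimic ln -fs: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		log.Fatal(err)
	}
}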
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
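
The long toEnable=map[...] line above is the profile's full addon switchboard: every known addon maps to an explicit bool, and only default-storageclass and storage-provisioner are true here, which is why only those two get set up in the surrounding lines. A hedged sketch of that map-driven dispatch (trimmed map and illustrative logging, not minikube's actual API):

    package main

    import (
    	"fmt"
    	"sort"
    )

    func main() {
    	// Trimmed version of the toEnable map in the log; the real map
    	// lists every addon minikube knows about, true or false.
    	toEnable := map[string]bool{
    		"default-storageclass": true,
    		"storage-provisioner":  true,
    		"ingress":              false,
    		"metrics-server":       false,
    	}
    	names := make([]string, 0, len(toEnable))
    	for name := range toEnable {
    		names = append(names, name)
    	}
    	sort.Strings(names) // deterministic order for logging
    	for _, name := range names {
    		if toEnable[name] {
    			fmt.Printf("Setting addon %s=true in profile\n", name)
    		}
    	}
    }
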
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
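
The --discovery-token-ca-cert-hash sha256:... value in the join commands above is not secret material: per the kubeadm documentation it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the control plane's identity before trusting it. A short Go program that recomputes the hash from the CA certificate path used on these nodes:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// kubeadm's discovery hash is sha256 over the CA cert's
    	// DER-encoded SubjectPublicKeyInfo, not over the whole PEM file.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
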
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
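
The bash -c pipeline above ("get configmap coredns ... | sed ... | kubectl replace -f -") is how the host.minikube.internal record lands in CoreDNS: sed splices a hosts { ... fallthrough } stanza in front of the "forward . /etc/resolv.conf" line (plus a log directive before errors), and the edited ConfigMap is written back with kubectl replace, producing the "host record injected" line that follows. A hedged Go equivalent of the string splice, assuming the Corefile text is already in hand:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	corefile := `.:53 {
            errors
            health
            forward . /etc/resolv.conf
            cache 30
    }`
    	// The same edit the sed pipeline performs: a hosts{} block ahead
    	// of the forward plugin, so host.minikube.internal resolves to the
    	// host-side gateway (192.168.76.1 on this cluster's network).
    	hosts := "        hosts {\n" +
    		"           192.168.76.1 host.minikube.internal\n" +
    		"           fallthrough\n" +
    		"        }\n"
    	out := strings.Replace(corefile,
    		"        forward . /etc/resolv.conf",
    		hosts+"        forward . /etc/resolv.conf", 1)
    	fmt.Println(out)
    }
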
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
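
The [kubelet-check] and [api-check] lines above show kubeadm polling health endpoints, first the kubelet's healthz on localhost and then the API server's /healthz (served on port 8444 for this profile), until each answers 200. A minimal sketch of that style of wait; the URL and the relaxed TLS setting are illustrative only, since kubeadm's real client verifies the serving certificate against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it answers 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Illustrative shortcut: skip verification instead of
    		// pinning the cluster CA as kubeadm does.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthy("https://127.0.0.1:8444/healthz", 4*time.Minute))
    }
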
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
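
The "cat /proc/$(pgrep kube-apiserver)/oom_adj" runs scattered through this log, and the matching ops.go:34 "apiserver oom_adj: -16" lines, are a sanity check that the API server is shielded from the kernel OOM killer: oom_adj ranges from -17 (never kill) to +15, so -16 leaves kube-apiserver among the very last processes the kernel will reap under memory pressure. A hedged Go mirror of the check:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirror of the check in the log: read the OOM adjustment of the
    	// running kube-apiserver process via its /proc entry.
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(out)))
    }
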
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
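
The kapi.go:214 lines here and below record minikube trimming CoreDNS from kubeadm's default of two replicas down to one, which is all a single-node cluster needs. A hedged kubectl equivalent of that rescale, driven from Go (context and object names taken from this profile; minikube itself does this through the API client rather than the CLI):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// One CoreDNS replica is enough on a single-node cluster.
    	cmd := exec.Command("kubectl", "--context", "no-preload-671514",
    		"-n", "kube-system", "scale", "deployment", "coredns",
    		"--replicas=1")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("%v: %s", err, out)
    	}
    }
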
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
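
From this point on, all four clusters are in the same wait: each node_ready.go:53 line is one poll of the node's Ready condition, repeated every couple of seconds against the 6m0s budget set earlier, and in these runs the condition stays "False" throughout, consistent with the StartStop failures tallied at the top of the report. A hedged one-shot version of the same check (context and node name from the no-preload profile):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// One-shot equivalent of the node_ready poll: extract the status
    	// of the node's Ready condition; it reads "True" once the kubelet,
    	// the container runtime, and the CNI are all healthy.
    	out, err := exec.Command("kubectl", "--context", "no-preload-671514",
    		"get", "node", "no-preload-671514", "-o",
    		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("node Ready condition: %s\n", out)
    }
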
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:44.456485  333931 node_ready.go:38] duration metric: took 4m0.003543817s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:30:44.458297  333931 out.go:201] 
	W0401 20:30:44.459571  333931 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:44.459594  333931 out.go:270] * 
	W0401 20:30:44.460727  333931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:44.461950  333931 out.go:201] 
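
All four profiles fail identically: the node-ready poll gives up after exactly 4m0s ("took 4m0.000055311s for node ... to be Ready" and the matching lines for the other three profiles), the 6m GUEST_START budget expires, and minikube exits. To reproduce the check by hand against one of these profiles, the equivalent kubectl wait is sketched below; this is a hand-rolled stand-in for minikube's internal wait, not its actual code path, with the context name taken from the profile above and the timeout mirroring the 4m observed here.

	# Sketch: wait for the Ready condition the way these logs do.
	kubectl --context no-preload-671514 wait --for=condition=Ready \
	  node/no-preload-671514 --timeout=4m
	# When it times out, the Conditions block explains why:
	kubectl --context no-preload-671514 describe node no-preload-671514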
	
	
	==> CRI-O <==
	Apr 01 20:36:02 no-preload-671514 crio[1038]: time="2025-04-01 20:36:02.242495565Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e8b770d6-2421-4008-b988-14e3e4781c21 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:13 no-preload-671514 crio[1038]: time="2025-04-01 20:36:13.242153692Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=84a1276a-0b56-4975-a2c1-0f325f1a675d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:13 no-preload-671514 crio[1038]: time="2025-04-01 20:36:13.242441085Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=84a1276a-0b56-4975-a2c1-0f325f1a675d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:26 no-preload-671514 crio[1038]: time="2025-04-01 20:36:26.241947682Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=082a9212-b25f-4c20-9d09-5883ac478690 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:26 no-preload-671514 crio[1038]: time="2025-04-01 20:36:26.242272003Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=082a9212-b25f-4c20-9d09-5883ac478690 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:39 no-preload-671514 crio[1038]: time="2025-04-01 20:36:39.241614390Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e7778946-f108-40e9-8142-01e9107843e3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:39 no-preload-671514 crio[1038]: time="2025-04-01 20:36:39.241900500Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e7778946-f108-40e9-8142-01e9107843e3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:53 no-preload-671514 crio[1038]: time="2025-04-01 20:36:53.242293373Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=74b6cd94-18ab-48fe-9335-554e8a760b0a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:53 no-preload-671514 crio[1038]: time="2025-04-01 20:36:53.242608561Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=74b6cd94-18ab-48fe-9335-554e8a760b0a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:06 no-preload-671514 crio[1038]: time="2025-04-01 20:37:06.242473038Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e721ff08-1674-413d-b71e-ff837de3f2ef name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:06 no-preload-671514 crio[1038]: time="2025-04-01 20:37:06.242748246Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e721ff08-1674-413d-b71e-ff837de3f2ef name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:20 no-preload-671514 crio[1038]: time="2025-04-01 20:37:20.242721823Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=80b4384a-daae-4a9b-8ac2-0e8033b39ad1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:20 no-preload-671514 crio[1038]: time="2025-04-01 20:37:20.242994444Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=80b4384a-daae-4a9b-8ac2-0e8033b39ad1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:31 no-preload-671514 crio[1038]: time="2025-04-01 20:37:31.242552447Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d18b179b-3d8f-49f7-b59c-baabec63da7c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:31 no-preload-671514 crio[1038]: time="2025-04-01 20:37:31.242784985Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d18b179b-3d8f-49f7-b59c-baabec63da7c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:43 no-preload-671514 crio[1038]: time="2025-04-01 20:37:43.242306242Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1e87f864-bfea-4530-a754-90d72f51c63d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:43 no-preload-671514 crio[1038]: time="2025-04-01 20:37:43.242525778Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1e87f864-bfea-4530-a754-90d72f51c63d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:56 no-preload-671514 crio[1038]: time="2025-04-01 20:37:56.242718814Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1204c605-9387-46e8-b543-f8f785b97f4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:56 no-preload-671514 crio[1038]: time="2025-04-01 20:37:56.242984356Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1204c605-9387-46e8-b543-f8f785b97f4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:08 no-preload-671514 crio[1038]: time="2025-04-01 20:38:08.242259853Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=b5415e90-6cf8-49ae-affe-6d0495956391 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:08 no-preload-671514 crio[1038]: time="2025-04-01 20:38:08.242560587Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=b5415e90-6cf8-49ae-affe-6d0495956391 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:22 no-preload-671514 crio[1038]: time="2025-04-01 20:38:22.241820312Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e30e4a77-8a18-4e8f-b41a-0e45389aa9fd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:22 no-preload-671514 crio[1038]: time="2025-04-01 20:38:22.242109762Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e30e4a77-8a18-4e8f-b41a-0e45389aa9fd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:35 no-preload-671514 crio[1038]: time="2025-04-01 20:38:35.241945268Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=35ba7315-0171-41cd-b9cd-989371342bf5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:35 no-preload-671514 crio[1038]: time="2025-04-01 20:38:35.242170884Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=35ba7315-0171-41cd-b9cd-989371342bf5 name=/runtime.v1.ImageService/ImageStatus
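
The CRI-O log is a steady ImageStatus poll: kubelet asks about docker.io/kindest/kindnetd:v20250214-acbabc1a roughly every 12-13 seconds and CRI-O answers "not found" every time, i.e. the kindnet CNI image was never pulled onto the node. A hedged way to confirm and try to unblock this by hand, from inside the node (reachable via `minikube -p no-preload-671514 ssh`), would be:

	# Is the kindnet image present at all?
	sudo crictl images | grep kindnetd
	# Attempt the pull that never happened (this may fail the same way if
	# the registry is unreachable from this runner):
	sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a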
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	85c1e320d180b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   8ef8085608dab       kube-proxy-pfvch
	b0aca46f57421       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   d6eb0bc2d9faa       kube-controller-manager-no-preload-671514
	b1305e045e585       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   7f48b88c185a1       kube-apiserver-no-preload-671514
	b23ca2b60aaee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   2269c2f962a90       kube-scheduler-no-preload-671514
	a09569ee98d25       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   313adeb65123a       etcd-no-preload-671514
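
Note what is absent from this table: only the four static control-plane containers and kube-proxy ever started. There is no kindnet container, even though a kindnet-5tgtq pod is scheduled (see the pod list in the node description below), which matches the image-not-found loop above. A quick check for the missing container, assuming crictl's --name filter is available on this crictl version:

	# List all containers (including exited) whose name matches kindnet:
	sudo crictl ps -a --name kindnet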
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:35:03 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3cd2d371a346a59dfa1024d7cfa972
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
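
The Ready condition above carries the root cause in its message: NetworkPluginNotReady, "No CNI configuration file in /etc/cni/net.d/". kindnet is the component that would write that configuration, and its image never arrived, so the node can never turn Ready and every wait in this test group times out. To extract just that condition from the node object, a jsonpath query of the following shape works:

	# Print only the Ready condition's message:
	kubectl --context no-preload-671514 get node no-preload-671514 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'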
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [a09569ee98d25b8797a01583cf6bb9cf3fe3b924561e718c16c33790406ba75f] <==
	{"level":"info","ts":"2025-04-01T20:26:27.060933Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:26:27.060798Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:27.147043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:27.147263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.147382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:27.148230Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.148768Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:27.148843Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.149010Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149091Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149153Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:27.149574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:27.149690Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:27.150349Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.151183Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:26:27.151297Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:27.152062Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-04-01T20:36:27.850510Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":505}
	{"level":"info","ts":"2025-04-01T20:36:27.855271Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":505,"took":"4.473198ms","hash":2229897876,"current-db-size-bytes":1290240,"current-db-size":"1.3 MB","current-db-size-in-use-bytes":1290240,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-04-01T20:36:27.855319Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2229897876,"revision":505,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:43 up  1:21,  0 users,  load average: 1.05, 0.96, 1.64
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [b1305e045e585214e298aab4fd349ff7d954cc6f0d1e21c68ba6f8661dca4d35] <==
	I0401 20:26:29.756705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:29.756712       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:29.819873       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0401 20:26:29.822664       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:29.822700       1 policy_source.go:240] refreshing policies
	I0401 20:26:29.845121       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:29.846052       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:29.846334       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0401 20:26:29.846348       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0401 20:26:29.918153       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:30.638898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:30.642611       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:30.642630       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:31.117588       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:31.154903       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:31.247406       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:31.253764       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0401 20:26:31.255167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:31.259965       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:31.747957       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:32.159479       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:32.172748       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:32.181425       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:37.047528       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0401 20:26:37.096719       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b0aca46f57421e96e35baa84bcdcd9a6bad97eecb63ba229e036b31284013db3] <==
	I0401 20:26:36.200100       1 shared_informer.go:320] Caches are synced for node
	I0401 20:26:36.200176       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0401 20:26:36.200249       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0401 20:26:36.200261       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0401 20:26:36.200269       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0401 20:26:36.206895       1 shared_informer.go:320] Caches are synced for namespace
	I0401 20:26:36.208451       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-671514" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:36.208482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.208520       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:36.209406       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:36.261522       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0401 20:26:36.292706       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:36.292756       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:26:36.292766       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:26:36.367026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:37.266267       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:26:37.450979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="351.026065ms"
	I0401 20:26:37.543105       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.050087ms"
	I0401 20:26:37.543243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.483µs"
	I0401 20:26:38.246138       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="26.287677ms"
	I0401 20:26:38.269291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.910701ms"
	I0401 20:26:38.271288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.904763ms"
	I0401 20:26:38.271582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="184.754µs"
	I0401 20:29:56.854082       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	I0401 20:35:03.677585       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	
	
	==> kube-proxy [85c1e320d180bbd0088975d6a178f8be6cd9d4bc212333659d16d82afc49e614] <==
	I0401 20:26:37.949549       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:38.161117       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:26:38.161200       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:38.192676       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:38.192754       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:38.226172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:38.226996       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:38.227319       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:38.229729       1 config.go:199] "Starting service config controller"
	I0401 20:26:38.229801       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:38.229841       1 config.go:329] "Starting node config controller"
	I0401 20:26:38.237960       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:38.230235       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:38.238081       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:38.333244       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:38.343398       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:38.346335       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b23ca2b60aaee9f0d3c9d088f7ba444675fd1621dfc819621355bfa1d77ccdfb] <==
	W0401 20:26:29.834918       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:29.834950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835026       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:29.835049       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835293       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:29.835324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835415       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:29.835478       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.835574       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:29.835598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:29.838254       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:29.838318       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.680771       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:30.680814       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.817477       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:30.817608       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.834173       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:30.834218       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.911974       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:30.912043       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:30.940767       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:30.940821       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0401 20:26:32.556366       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:37:52 no-preload-671514 kubelet[2620]: E0401 20:37:52.322748    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:37:56 no-preload-671514 kubelet[2620]: E0401 20:37:56.243237    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:37:57 no-preload-671514 kubelet[2620]: E0401 20:37:57.324284    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.247459    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539882247258229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.247500    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539882247258229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:02 no-preload-671514 kubelet[2620]: E0401 20:38:02.325390    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:07 no-preload-671514 kubelet[2620]: E0401 20:38:07.326429    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:08 no-preload-671514 kubelet[2620]: E0401 20:38:08.242827    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.248433    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539892248265721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.248474    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539892248265721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:12 no-preload-671514 kubelet[2620]: E0401 20:38:12.327272    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:17 no-preload-671514 kubelet[2620]: E0401 20:38:17.328455    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.242377    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.249358    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539902249199486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.249398    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539902249199486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:22 no-preload-671514 kubelet[2620]: E0401 20:38:22.329988    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:27 no-preload-671514 kubelet[2620]: E0401 20:38:27.330725    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.250911    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539912250729302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.250941    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539912250729302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:32 no-preload-671514 kubelet[2620]: E0401 20:38:32.332062    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:35 no-preload-671514 kubelet[2620]: E0401 20:38:35.242471    2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:38:37 no-preload-671514 kubelet[2620]: E0401 20:38:37.333216    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:42 no-preload-671514 kubelet[2620]: E0401 20:38:42.252506    2620 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539922252252487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:42 no-preload-671514 kubelet[2620]: E0401 20:38:42.252552    2620 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539922252252487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:42 no-preload-671514 kubelet[2620]: E0401 20:38:42.334681    2620 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1 (67.989488ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hxxvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m40s (x2 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (484.29s)

TestStartStop/group/embed-certs/serial/DeployApp (485.03s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-974821 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d756ccb7-a70b-4ca0-9fce-6e16e7005e93] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:329: TestStartStop/group/embed-certs/serial/DeployApp: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:194: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
start_stop_delete_test.go:194: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
start_stop_delete_test.go:194: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2025-04-01 20:38:45.967721553 +0000 UTC m=+3211.568652988
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-974821 describe po busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context embed-certs-974821 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-qwn44:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m38s (x2 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-974821 logs busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context embed-certs-974821 logs busybox -n default:
start_stop_delete_test.go:194: wait: integration-test=busybox within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:16.922485679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89edf444d031870b678606c3dab14cec64f5db6770fe8f67ec9b313ab700bd50",
	            "SandboxKey": "/var/run/docker/netns/89edf444d031",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:e2:72:9d:20:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "8c07b01949d42e8f17c50ba6d828c0850ad6e031d8825f2ba64c77c1d4a405fd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25: (1.326704245s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status docker --all                         |                              |         |         |                     |                     |
	|         | --full --no-pager                                     |                              |         |         |                     |                     |
	| delete  | -p bridge-460236                                      | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                  |                              |         |         |                     |                     |
	|         | --no-pager                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                         | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                           |                              |         |         |                     |                     |
	|         | --all --full --no-pager                               |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                              |                              |         |         |                     |                     |
	|         | --no-pager                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                           |                              |         |         |                     |                     |
	|         | --all --full --no-pager                               |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                         |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                              |                              |         |         |                     |                     |
	|         | --no-pager                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                           |                              |         |         |                     |                     |
	|         | --full --no-pager                                     |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                         |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                         |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                           | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                     | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                    | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                          |                              |         |         |                     |                     |
	| start   | -p                                                    | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                          |                              |         |         |                     |                     |
	|         | --memory=2200                                         |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                 |                              |         |         |                     |                     |
	|         | --driver=docker                                       |                              |         |         |                     |                     |
	|         | --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514            | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --alsologtostderr -v=3                                |                              |         |         |                     |                     |
	|---------|-------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:26:18
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:26:18.730820  333931 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:26:18.733545  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.733563  333931 out.go:358] Setting ErrFile to fd 2...
	I0401 20:26:18.733571  333931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:26:18.738068  333931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:26:18.738963  333931 out.go:352] Setting JSON to false
	I0401 20:26:18.740623  333931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4125,"bootTime":1743535054,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:26:18.740803  333931 start.go:139] virtualization: kvm guest
	I0401 20:26:18.742724  333931 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:26:18.744296  333931 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:26:18.745845  333931 notify.go:220] Checking for updates...
	I0401 20:26:18.747318  333931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:26:18.748893  333931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:18.750366  333931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:26:18.751459  333931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:26:18.752672  333931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:26:18.754306  333931 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754458  333931 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:18.754565  333931 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:18.754701  333931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:26:18.789341  333931 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:26:18.789409  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.881271  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.86763666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.881427  333931 docker.go:318] overlay module found
	I0401 20:26:18.885256  333931 out.go:177] * Using the docker driver based on user configuration
	I0401 20:26:18.886587  333931 start.go:297] selected driver: docker
	I0401 20:26:18.886610  333931 start.go:901] validating driver "docker" against <nil>
	I0401 20:26:18.886630  333931 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:26:18.887954  333931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:26:18.963854  333931 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:90 OomKillDisable:true NGoroutines:99 SystemTime:2025-04-01 20:26:18.950352252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:26:18.964074  333931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 20:26:18.964363  333931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:26:18.968028  333931 out.go:177] * Using Docker driver with root privileges
	I0401 20:26:18.970719  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.970819  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.970829  333931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:26:18.970901  333931 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
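
The config dump above is what gets serialized to the profile's config.json in the step that follows. A minimal sketch of that round-trip, assuming a hypothetical trimmed-down subset of the real ClusterConfig struct:

	// Sketch only: the struct is an illustrative subset, not minikube's type.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type ClusterConfig struct {
		Name              string
		Memory            int
		CPUs              int
		APIServerPort     int
		KubernetesVersion string
	}

	func main() {
		cfg := ClusterConfig{
			Name:              "default-k8s-diff-port-993330",
			Memory:            2200,
			CPUs:              2,
			APIServerPort:     8444,
			KubernetesVersion: "v1.32.2",
		}
		// Write the profile config, then read it back to show the round-trip.
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("config.json", data, 0o644); err != nil {
			panic(err)
		}
		var back ClusterConfig
		data, _ = os.ReadFile("config.json")
		if err := json.Unmarshal(data, &back); err != nil {
			panic(err)
		}
		fmt.Printf("round-tripped: %+v\n", back)
	}
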
	I0401 20:26:18.973096  333931 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:26:18.974471  333931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:26:18.975839  333931 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:26:18.976959  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:18.977004  333931 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:26:18.977013  333931 cache.go:56] Caching tarball of preloaded images
	I0401 20:26:18.977014  333931 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:26:18.977118  333931 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:26:18.977129  333931 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:26:18.977241  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:18.977263  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json: {Name:mk41b8c624bf3b117b50b0e33d2457d4436df42e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:19.026924  333931 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:26:19.026949  333931 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:26:19.026964  333931 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:26:19.026998  333931 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:26:19.027106  333931 start.go:364] duration metric: took 87.785µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:26:19.027138  333931 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:19.027241  333931 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:26:16.763271  330894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-974821:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.939069364s)
	I0401 20:26:16.763308  330894 kic.go:203] duration metric: took 4.939248261s to extract preloaded images to volume ...
	W0401 20:26:16.763457  330894 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:16.763573  330894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:16.847617  330894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-974821 --name embed-certs-974821 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-974821 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-974821 --network embed-certs-974821 --ip 192.168.94.2 --volume embed-certs-974821:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:17.529078  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Running}}
	I0401 20:26:17.555101  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:17.586968  330894 cli_runner.go:164] Run: docker exec embed-certs-974821 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:17.648014  330894 oci.go:144] the created container "embed-certs-974821" has a running status.
	I0401 20:26:17.648051  330894 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa...
	I0401 20:26:18.285330  330894 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:18.311984  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.345653  330894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:18.345686  330894 kic_runner.go:114] Args: [docker exec --privileged embed-certs-974821 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:18.411930  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:18.443321  330894 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:18.443410  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.467216  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.467559  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.467574  330894 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:18.609796  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.609837  330894 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:26:18.609906  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.630114  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.630435  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.630455  330894 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:26:18.800604  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:26:18.800683  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:18.831071  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:18.831374  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:18.831407  330894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:18.987643  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
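
The SSH command just executed rewrites an existing 127.0.1.1 entry in /etc/hosts, or appends one, so the new hostname resolves locally (the initial grep guard that skips the edit when the name is already present is omitted here). A rough Go equivalent of that rewrite-or-append logic, operating on a local copy of the file so it is safe to run anywhere:

	// Sketch of the /etc/hosts fix-up; file name and helper are illustrative.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setLoopbackHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Same pattern as the sed expression: any existing 127.0.1.1 line.
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		var out []byte
		if re.Match(data) {
			out = re.ReplaceAll(data, []byte(entry))
		} else {
			out = append(data, []byte("\n"+entry+"\n")...)
		}
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = os.WriteFile("hosts.txt", []byte("127.0.0.1 localhost\n"), 0o644)
		if err := setLoopbackHostname("hosts.txt", "embed-certs-974821"); err != nil {
			panic(err)
		}
		data, _ := os.ReadFile("hosts.txt")
		fmt.Print(string(data))
	}
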
	I0401 20:26:18.987672  330894 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:18.987707  330894 ubuntu.go:177] setting up certificates
	I0401 20:26:18.987721  330894 provision.go:84] configureAuth start
	I0401 20:26:18.987773  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:19.010995  330894 provision.go:143] copyHostCerts
	I0401 20:26:19.011066  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:19.011080  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:19.011159  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:19.011260  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:19.011270  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:19.011301  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:19.011371  330894 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:19.011378  330894 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:19.011411  330894 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:19.011519  330894 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:26:19.375012  330894 provision.go:177] copyRemoteCerts
	I0401 20:26:19.375087  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:19.375140  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.400831  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:19.503241  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:26:19.531832  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:19.561562  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:19.591125  330894 provision.go:87] duration metric: took 603.38883ms to configureAuth
	I0401 20:26:19.591155  330894 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:19.591379  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:19.591497  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:19.620112  330894 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:19.620321  330894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I0401 20:26:19.620334  330894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:20.028896  330894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:20.028925  330894 machine.go:96] duration metric: took 1.585582101s to provisionDockerMachine
	I0401 20:26:20.028936  330894 client.go:171] duration metric: took 9.097879081s to LocalClient.Create
	I0401 20:26:20.028950  330894 start.go:167] duration metric: took 9.097939352s to libmachine.API.Create "embed-certs-974821"
	I0401 20:26:20.028959  330894 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:26:20.028972  330894 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:20.029037  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:20.029089  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.051160  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.157215  330894 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:20.160770  330894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:20.160808  330894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:20.160818  330894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:20.160825  330894 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:20.160837  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:20.160897  330894 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:20.160997  330894 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:20.161151  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:20.173719  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:20.205924  330894 start.go:296] duration metric: took 176.952692ms for postStartSetup
	I0401 20:26:20.206280  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.233912  330894 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:26:20.234197  330894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:20.234246  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.264690  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.375270  330894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:20.380996  330894 start.go:128] duration metric: took 9.45211333s to createHost
	I0401 20:26:20.381027  330894 start.go:83] releasing machines lock for "embed-certs-974821", held for 9.452287035s
	I0401 20:26:20.381088  330894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:26:20.404010  330894 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:20.404054  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.404141  330894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:20.404219  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:20.436974  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.443906  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:20.643641  330894 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:20.648179  330894 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:18.704089  320217 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0401 20:26:18.704123  320217 cache_images.go:123] Successfully loaded all cached images
	I0401 20:26:18.704128  320217 cache_images.go:92] duration metric: took 17.284939204s to LoadCachedImages
	I0401 20:26:18.704139  320217 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:18.704219  320217 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
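
A note on the unit above: the bare ExecStart= line is deliberate systemd syntax. In a drop-in, an empty ExecStart= clears the command inherited from the base kubelet.service before the next line installs the override. Minikube renders this drop-in from the node config; a minimal sketch of doing so with text/template (the fields and template are illustrative, not minikube's own):

	// Sketch: render a kubelet drop-in; field names are hypothetical.
	package main

	import (
		"os"
		"text/template"
	)

	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		if err := t.Execute(os.Stdout, map[string]string{
			"Runtime": "crio",
			"Version": "v1.32.2",
			"Node":    "no-preload-671514",
			"IP":      "192.168.76.2",
		}); err != nil {
			panic(err)
		}
	}
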
	I0401 20:26:18.704276  320217 ssh_runner.go:195] Run: crio config
	I0401 20:26:18.757951  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:18.757967  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:18.757976  320217 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:18.757998  320217 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:18.758098  320217 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
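
The kubeadm config above is a single YAML stream of four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check a generated file like this is to decode it document by document; a small sketch using gopkg.in/yaml.v3, with the on-host path assumed from the log:

	// Sketch: enumerate the documents in a multi-doc YAML stream.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break // end of the stream
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
		}
	}
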
	
	I0401 20:26:18.758154  320217 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.768955  320217 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0401 20:26:18.769017  320217 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:18.780560  320217 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0401 20:26:18.780618  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet
	I0401 20:26:18.780639  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0401 20:26:18.780759  320217 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm
	I0401 20:26:18.785435  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0401 20:26:18.785465  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0401 20:26:20.056132  320217 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:26:20.071013  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0401 20:26:20.075222  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0401 20:26:20.075249  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0401 20:26:20.353036  320217 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0401 20:26:20.359017  320217 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0401 20:26:20.359060  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
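
Each binary above is fetched from dl.k8s.io and checked against the digest served at the matching .sha256 URL (see the Downloading lines earlier). A minimal sketch of that verify-after-download pattern, shown for kubectl only, with error handling trimmed:

	// Sketch: download a release binary and verify it against its .sha256 file.
	package main

	import (
		"crypto/sha256"
		"fmt"
		"io"
		"net/http"
		"strings"
	)

	func fetch(url string) []byte {
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		data, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		return data
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl"
		bin := fetch(base)
		// The .sha256 file holds just the hex digest of the binary.
		want := strings.TrimSpace(string(fetch(base + ".sha256")))
		got := fmt.Sprintf("%x", sha256.Sum256(bin))
		if got != want {
			panic("checksum mismatch for kubectl")
		}
		fmt.Println("kubectl checksum OK:", got)
	}
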
	I0401 20:26:20.620194  320217 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:20.630621  320217 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:26:20.649377  320217 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:20.669072  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:26:20.687859  320217 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:20.692137  320217 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:20.705020  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:20.783000  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:20.797428  320217 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:26:20.797458  320217 certs.go:194] generating shared ca certs ...
	I0401 20:26:20.797479  320217 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:20.797648  320217 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:20.797718  320217 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:20.797732  320217 certs.go:256] generating profile certs ...
	I0401 20:26:20.797824  320217 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:26:20.797841  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt with IP's: []
	I0401 20:26:21.025289  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt ...
	I0401 20:26:21.025326  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.crt: {Name:mke9875eb54d53b0e963b356ad83bcd75e7a7412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025561  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key ...
	I0401 20:26:21.025582  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key: {Name:mk5cf5928a944f1ac50d55701032ad8dae5bfdcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.025703  320217 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:26:21.025727  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0401 20:26:21.703494  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 ...
	I0401 20:26:21.703527  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789: {Name:mkff154c452b8abb791f6205356ff8f00084ac42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703729  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 ...
	I0401 20:26:21.703749  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789: {Name:mk98a1753bc671ea092085863390fd551854922e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.703850  320217 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt
	I0401 20:26:21.703945  320217 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key
	I0401 20:26:21.704021  320217 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:26:21.704043  320217 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt with IP's: []
	I0401 20:26:21.823952  320217 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt ...
	I0401 20:26:21.823994  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt: {Name:mk12ddb26dc8992914033bccb24e739dc4a1ef16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:21.824260  320217 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key ...
	I0401 20:26:21.824291  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key: {Name:mkdb31dfa4b6dd47b5225d572106f6b4e48a1935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
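
The profile certs generated above give the apiserver a serving certificate whose SANs cover the service VIP (10.96.0.1), localhost, and the node IP, which is why those addresses appear in the "with IP's" line. A compact sketch of producing that kind of SAN-bearing certificate with crypto/x509; it self-signs for brevity, where minikube signs with its CA:

	// Sketch: a serving cert with IP and DNS SANs; self-signed for brevity.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("192.168.76.2"),
			},
			DNSNames:    []string{"localhost", "control-plane.minikube.internal"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}
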
	I0401 20:26:21.824569  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:21.824627  320217 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:21.824643  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:21.824677  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:21.824715  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:21.824748  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:21.824812  320217 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:21.825605  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:21.850775  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:21.877956  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:21.901694  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:21.925814  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:26:21.958552  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:26:21.988393  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:22.012826  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:22.050282  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:22.076704  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:22.099879  320217 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:22.123774  320217 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:22.145012  320217 ssh_runner.go:195] Run: openssl version
	I0401 20:26:22.151397  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:22.162414  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166551  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.166619  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:22.173527  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:22.183936  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:22.194218  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198190  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.198311  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:22.206703  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:22.216650  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:22.227467  320217 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231786  320217 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.231858  320217 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:22.239197  320217 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
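
The b5213941.0-style names being linked above are OpenSSL subject-hash names: symlinks that let TLS verifiers locate a CA in /etc/ssl/certs by the hash that `openssl x509 -hash -noout` prints for it. A small sketch of the same two steps, with the paths taken from the log and assumed to exist:

	// Sketch: compute the subject hash via openssl, then create <hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		ca := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(ca, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", ca)
	}
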
	I0401 20:26:22.268104  320217 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:22.275324  320217 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:22.275398  320217 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:22.275510  320217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:22.275581  320217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:22.342807  320217 cri.go:89] found id: ""
	I0401 20:26:22.342887  320217 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:22.352857  320217 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:22.397706  320217 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:22.397797  320217 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:22.406979  320217 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:22.407000  320217 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:22.407039  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:22.416134  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:22.416218  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:22.425226  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:22.434731  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:22.434800  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:22.447967  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.457983  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:22.458075  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:22.469883  320217 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:22.479202  320217 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:22.479268  320217 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
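The eight Run lines above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A condensed shell sketch of the same loop:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it points at the expected endpoint; otherwise remove it
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done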
	I0401 20:26:22.488113  320217 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:22.556959  320217 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:22.557052  320217 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:22.577518  320217 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:22.577611  320217 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:22.577671  320217 kubeadm.go:310] OS: Linux
	I0401 20:26:22.577732  320217 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:22.577821  320217 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:22.577891  320217 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:22.577964  320217 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:22.578040  320217 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:22.578124  320217 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:22.578277  320217 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:22.578356  320217 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:22.578457  320217 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:22.633543  320217 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:22.633691  320217 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:22.633859  320217 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:22.672052  320217 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:22.744648  320217 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:22.744803  320217 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:22.744884  320217 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:19.030494  333931 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:26:19.030759  333931 start.go:159] libmachine.API.Create for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:19.030792  333931 client.go:168] LocalClient.Create starting
	I0401 20:26:19.030892  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:26:19.030926  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.030951  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031015  333931 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:26:19.031039  333931 main.go:141] libmachine: Decoding PEM data...
	I0401 20:26:19.031052  333931 main.go:141] libmachine: Parsing certificate...
	I0401 20:26:19.031486  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:26:19.058636  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:26:19.058698  333931 network_create.go:284] running [docker network inspect default-k8s-diff-port-993330] to gather additional debugging logs...
	I0401 20:26:19.058720  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330
	W0401 20:26:19.076276  333931 cli_runner.go:211] docker network inspect default-k8s-diff-port-993330 returned with exit code 1
	I0401 20:26:19.076321  333931 network_create.go:287] error running [docker network inspect default-k8s-diff-port-993330]: docker network inspect default-k8s-diff-port-993330: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-993330 not found
	I0401 20:26:19.076339  333931 network_create.go:289] output of [docker network inspect default-k8s-diff-port-993330]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-993330 not found
	
	** /stderr **
	I0401 20:26:19.076470  333931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:19.100145  333931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:26:19.101014  333931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:26:19.101930  333931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:26:19.102831  333931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:26:19.103655  333931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8fa1190968e9 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:aa:29:6a:ad:93} reservation:<nil>}
	I0401 20:26:19.104914  333931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-7bc427b9d0a7 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:2a:7f:b7:10:d1:64} reservation:<nil>}
	I0401 20:26:19.106178  333931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f86d90}
	I0401 20:26:19.106207  333931 network_create.go:124] attempt to create docker network default-k8s-diff-port-993330 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0401 20:26:19.106258  333931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 default-k8s-diff-port-993330
	I0401 20:26:19.172538  333931 network_create.go:108] docker network default-k8s-diff-port-993330 192.168.103.0/24 created
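The subnet scan above walks 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, 85, 94, ...) and picks the first slot no existing bridge occupies; here 192.168.103.0/24 was the first free one. A rough shell approximation of that probe (minikube does this in Go, in network.go):

	for third in $(seq 49 9 103); do
	  subnet="192.168.${third}.0/24"
	  # count docker networks already using this exact subnet
	  taken=$(docker network ls -q | xargs -r docker network inspect \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' 2>/dev/null | grep -Fcx "$subnet")
	  [ "$taken" -eq 0 ] && { echo "free: $subnet"; break; }
	done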
	I0401 20:26:19.172574  333931 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-993330" container
	I0401 20:26:19.172642  333931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:26:19.192037  333931 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-993330 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:26:19.213490  333931 oci.go:103] Successfully created a docker volume default-k8s-diff-port-993330
	I0401 20:26:19.213570  333931 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-993330-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --entrypoint /usr/bin/test -v default-k8s-diff-port-993330:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:26:20.063796  333931 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-993330
	I0401 20:26:20.063838  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:20.063873  333931 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:26:20.063966  333931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
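Two throwaway containers prepare the volume above: the first runs the kicbase image with --entrypoint /usr/bin/test and argument -d /var/lib, a no-op whose only effect is that Docker copies the image's /var into the fresh named volume on first mount; the second mounts the preload tarball read-only and untars it into the same volume. The generic pattern (IMAGE and local paths are placeholders):

	docker volume create myvol
	# populate the volume from the image's /var via copy-on-first-mount
	docker run --rm --entrypoint /usr/bin/test -v myvol:/var IMAGE -d /var/lib
	# extract a preloaded tarball into the volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" -v myvol:/extractDir \
	  IMAGE -I lz4 -xf /preloaded.tar -C /extractDir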
	I0401 20:26:20.798923  330894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:20.804592  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.825829  330894 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:20.825910  330894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:20.857889  330894 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:20.857914  330894 start.go:495] detecting cgroup driver to use...
	I0401 20:26:20.857950  330894 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:20.857999  330894 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:20.876027  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:20.886840  330894 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:20.886894  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:20.899593  330894 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:20.913852  330894 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:20.999530  330894 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:21.105398  330894 docker.go:233] disabling docker service ...
	I0401 20:26:21.105462  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:21.128681  330894 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:21.143119  330894 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:21.239431  330894 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:21.347556  330894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
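The systemctl sequence above quiesces the Docker-side runtimes so CRI-O is the only CRI on the node: each unit is stopped, its socket disabled, and the service masked so nothing can reactivate it. Condensed from the log:

	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service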
	I0401 20:26:21.362149  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:21.378024  330894 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:21.378091  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.387719  330894 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:21.387780  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.397252  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.407209  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.416854  330894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:21.425951  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.435894  330894 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.451330  330894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:21.460997  330894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:21.469673  330894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:21.478054  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:21.575835  330894 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:24.329419  330894 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.753533672s)
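After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up as below (reconstructed from the commands; the resulting file itself is not captured in the log):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]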
	I0401 20:26:24.329455  330894 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:24.329517  330894 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:24.334301  330894 start.go:563] Will wait 60s for crictl version
	I0401 20:26:24.334347  330894 ssh_runner.go:195] Run: which crictl
	I0401 20:26:24.338065  330894 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:24.393080  330894 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:24.393163  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.436816  330894 ssh_runner.go:195] Run: crio --version
	I0401 20:26:24.491421  330894 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:23.013929  320217 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:23.124710  320217 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:23.261834  320217 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:23.421361  320217 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:23.643148  320217 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:23.643311  320217 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:23.896342  320217 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:23.896584  320217 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-671514] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0401 20:26:24.180117  320217 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:24.383338  320217 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:24.608762  320217 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:24.614000  320217 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:24.874525  320217 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:25.114907  320217 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:25.371100  320217 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:25.498988  320217 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:25.684916  320217 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:25.685557  320217 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:25.687998  320217 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:24.492924  330894 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:24.515702  330894 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:24.521193  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
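The /etc/hosts rewrite above is an idempotent replace-then-append: strip any stale line for the name, append the fresh mapping, and copy the result back with sudo (a plain redirect into /etc/hosts would run in the caller's shell without root). The same pattern for any entry:

	NAME=host.minikube.internal; IP=192.168.94.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts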
	I0401 20:26:24.536171  330894 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:24.536328  330894 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:24.536409  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.640432  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.640460  330894 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:24.640514  330894 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:24.685542  330894 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:24.685565  330894 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:24.685574  330894 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:26:24.685668  330894 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
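The empty ExecStart= line in the kubelet unit above is the standard systemd override idiom: for a non-oneshot service a drop-in must first clear the inherited ExecStart before defining a replacement, otherwise systemd rejects the second value. The general shape of such a drop-in (paths here are illustrative):

	[Service]
	ExecStart=
	ExecStart=/path/to/new/command --with-new-flags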
	I0401 20:26:24.685743  330894 ssh_runner.go:195] Run: crio config
	I0401 20:26:24.766212  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:24.766237  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:24.766252  330894 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:24.766284  330894 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:24.766431  330894 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:24.766497  330894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:24.778790  330894 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:24.778851  330894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:24.789824  330894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:26:24.811427  330894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:24.832231  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
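With the rendered config staged as kubeadm.yaml.new, it could also be sanity-checked offline before init; kubeadm ships a validator for this (available since v1.26; shown here as an optional step, not something the log performs):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new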
	I0401 20:26:24.850731  330894 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:24.854382  330894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:24.866403  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:24.972070  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:24.986029  330894 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:26:24.986052  330894 certs.go:194] generating shared ca certs ...
	I0401 20:26:24.986071  330894 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:24.986217  330894 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:24.986270  330894 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:24.986282  330894 certs.go:256] generating profile certs ...
	I0401 20:26:24.986350  330894 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:26:24.986366  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt with IP's: []
	I0401 20:26:25.561289  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt ...
	I0401 20:26:25.561329  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.crt: {Name:mk536b76487556389d29ad8574ff5ad7bbbb92f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561535  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key ...
	I0401 20:26:25.561595  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key: {Name:mk06a6896cbdd8d679b12e456058f02b8f5cecd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.561758  330894 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:26:25.561783  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0401 20:26:25.644415  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e ...
	I0401 20:26:25.644442  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e: {Name:mk34470e247b340bed5a173c03f86a16dc60e78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644616  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e ...
	I0401 20:26:25.644634  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e: {Name:mk4c295a29c57f2c76710e0b9b364042d092e6af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:25.644731  330894 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt
	I0401 20:26:25.644851  330894 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key
	I0401 20:26:25.644945  330894 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:26:25.644968  330894 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt with IP's: []
	I0401 20:26:26.214362  318306 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0401 20:26:26.214472  318306 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.214629  318306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.214721  318306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.214772  318306 kubeadm.go:310] OS: Linux
	I0401 20:26:26.214839  318306 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.214911  318306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.214980  318306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.215050  318306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.215120  318306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.215191  318306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.215257  318306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.215328  318306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.215434  318306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.215559  318306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.215673  318306 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0401 20:26:26.215753  318306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.217135  318306 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.217235  318306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.217313  318306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.217422  318306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:26.217503  318306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:26.217623  318306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:26.217724  318306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:26.217832  318306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:26.218026  318306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218112  318306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:26.218299  318306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-964633] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:26:26.218403  318306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:26.218506  318306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:26.218576  318306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:26.218652  318306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:26.218719  318306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:26.218796  318306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:26.218887  318306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:26.218972  318306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:26.219140  318306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:26.219260  318306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:26.219320  318306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:26.219415  318306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:26.221597  318306 out.go:235]   - Booting up control plane ...
	I0401 20:26:26.221711  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:26.221832  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:26.221920  318306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:26.222041  318306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:26.222287  318306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0401 20:26:26.222368  318306 kubeadm.go:310] [apiclient] All control plane components are healthy after 16.002573 seconds
	I0401 20:26:26.222512  318306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:26.222668  318306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:26.222767  318306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:26.223041  318306 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-964633 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0401 20:26:26.223123  318306 kubeadm.go:310] [bootstrap-token] Using token: fypcag.rftl5mjclps03e3q
	I0401 20:26:26.224467  318306 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:26.224625  318306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:26.224753  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:26.224943  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:26.225135  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:26.225281  318306 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:26.225432  318306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:26.225610  318306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:26.225682  318306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:26.225797  318306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:26.225810  318306 kubeadm.go:310] 
	I0401 20:26:26.225889  318306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:26.225899  318306 kubeadm.go:310] 
	I0401 20:26:26.226006  318306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:26.226017  318306 kubeadm.go:310] 
	I0401 20:26:26.226057  318306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:26.226155  318306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:26.226230  318306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:26.226240  318306 kubeadm.go:310] 
	I0401 20:26:26.226321  318306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:26.226340  318306 kubeadm.go:310] 
	I0401 20:26:26.226412  318306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:26.226428  318306 kubeadm.go:310] 
	I0401 20:26:26.226497  318306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:26.226616  318306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:26.226709  318306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:26.226724  318306 kubeadm.go:310] 
	I0401 20:26:26.226842  318306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:26.226966  318306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:26.226982  318306 kubeadm.go:310] 
	I0401 20:26:26.227118  318306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227294  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:26.227335  318306 kubeadm.go:310]     --control-plane 
	I0401 20:26:26.227345  318306 kubeadm.go:310] 
	I0401 20:26:26.227466  318306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:26.227481  318306 kubeadm.go:310] 
	I0401 20:26:26.227595  318306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fypcag.rftl5mjclps03e3q \
	I0401 20:26:26.227775  318306 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:26.227801  318306 cni.go:84] Creating CNI manager for ""
	I0401 20:26:26.227810  318306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:26.229908  318306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:26.093967  330894 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt ...
	I0401 20:26:26.094055  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt: {Name:mkd7383c98f7836cbb1915ebedd5c06bc1373b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094280  330894 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key ...
	I0401 20:26:26.094332  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key: {Name:mk3bcba75fecb3e0555fc6c711acaf5f2149d6b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:26.094626  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:26.094703  330894 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:26.094726  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:26.094788  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:26.094838  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:26.094891  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:26.094971  330894 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.095809  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:26.118761  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:26.145911  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:26.170945  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:26.193905  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:26:26.219847  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:26.246393  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:26.271327  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:26:26.297378  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:26.323815  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:26.359204  330894 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:26.389791  330894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:26.408612  330894 ssh_runner.go:195] Run: openssl version
	I0401 20:26:26.414310  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:26.423887  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427471  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.427536  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:26.434675  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:26.443767  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:26.453242  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456856  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.456909  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:26.463995  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:26.474412  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:26.484100  330894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487750  330894 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.487806  330894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:26.495937  330894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
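The openssl x509 -hash -noout runs above compute the subject-name hash that names each /etc/ssl/certs/<hash>.0 symlink (b5213941.0 for minikubeCA.pem, and so on); this is how OpenSSL locates CAs in a hashed certificate directory. The generic pattern:

	# link a CA into OpenSSL's hashed cert directory
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"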
	I0401 20:26:26.506268  330894 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:26.510090  330894 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:26.510144  330894 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:26.510251  330894 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:26.510306  330894 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:26.549531  330894 cri.go:89] found id: ""
	I0401 20:26:26.549591  330894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:26.560092  330894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:26.569126  330894 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:26.569202  330894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:26.578798  330894 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:26.578817  330894 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:26.578863  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:26:26.587232  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:26.587280  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:26.595948  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:26:26.604492  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:26.604560  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:26.614446  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.624719  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:26.624783  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:26.635355  330894 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:26:26.647037  330894 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:26.647109  330894 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:26.655651  330894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:26.709584  330894 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:26.709907  330894 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:26.735070  330894 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:26.735157  330894 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:26.735198  330894 kubeadm.go:310] OS: Linux
	I0401 20:26:26.735253  330894 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:26.735307  330894 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:26.735359  330894 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:26.735411  330894 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:26.735468  330894 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:26.735536  330894 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:26.735593  330894 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:26.735669  330894 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:26.735730  330894 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:26.803818  330894 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:26.803970  330894 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:26.804091  330894 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:26.811281  330894 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.231065  318306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:26.234959  318306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0401 20:26:26.234975  318306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:26.252673  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:26.634659  318306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:26.634773  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:26.634829  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-964633 minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=old-k8s-version-964633 minikube.k8s.io/primary=true
	I0401 20:26:26.766148  318306 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:26.766281  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:25.689888  320217 out.go:235]   - Booting up control plane ...
	I0401 20:26:25.690011  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:25.690139  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:25.690951  320217 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:25.702609  320217 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:25.710116  320217 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:25.710231  320217 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:25.811433  320217 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:25.811592  320217 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:26.813131  320217 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001728428s
	I0401 20:26:26.813266  320217 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:24.237649  333931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993330:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.173629863s)
	I0401 20:26:24.237687  333931 kic.go:203] duration metric: took 4.173809832s to extract preloaded images to volume ...
	W0401 20:26:24.237885  333931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:26:24.238031  333931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:26:24.308572  333931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993330 --name default-k8s-diff-port-993330 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993330 --network default-k8s-diff-port-993330 --ip 192.168.103.2 --volume default-k8s-diff-port-993330:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:26:24.677655  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Running}}
	I0401 20:26:24.697969  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:24.727575  333931 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993330 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:26:24.782583  333931 oci.go:144] the created container "default-k8s-diff-port-993330" has a running status.
	I0401 20:26:24.782627  333931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa...
	I0401 20:26:25.212927  333931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:26:25.241317  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.267434  333931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:26:25.267458  333931 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993330 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:26:25.329230  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:25.353890  333931 machine.go:93] provisionDockerMachine start ...
	I0401 20:26:25.353997  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.375999  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.376240  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.376255  333931 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:26:25.513557  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.513586  333931 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:26:25.513655  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.540806  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.541102  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.541127  333931 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:26:25.698212  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:26:25.698298  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:25.720353  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:25.720578  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:25.720601  333931 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:26:25.858508  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:26:25.858541  333931 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:26:25.858600  333931 ubuntu.go:177] setting up certificates
	I0401 20:26:25.858616  333931 provision.go:84] configureAuth start
	I0401 20:26:25.858676  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:25.884955  333931 provision.go:143] copyHostCerts
	I0401 20:26:25.885010  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:26:25.885017  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:26:25.885078  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:26:25.885156  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:26:25.885160  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:26:25.885189  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:26:25.885238  333931 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:26:25.885242  333931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:26:25.885264  333931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:26:25.885307  333931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:26:26.231155  333931 provision.go:177] copyRemoteCerts
	I0401 20:26:26.231203  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:26:26.231240  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.253691  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.355444  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:26:26.387181  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:26:26.412042  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:26:26.437283  333931 provision.go:87] duration metric: took 578.65574ms to configureAuth
	I0401 20:26:26.437311  333931 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:26:26.437495  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:26.437593  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.458786  333931 main.go:141] libmachine: Using SSH client type: native
	I0401 20:26:26.459087  333931 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I0401 20:26:26.459115  333931 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:26:26.705379  333931 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:26:26.705407  333931 machine.go:96] duration metric: took 1.351492058s to provisionDockerMachine
	I0401 20:26:26.705418  333931 client.go:171] duration metric: took 7.674616564s to LocalClient.Create
	I0401 20:26:26.705435  333931 start.go:167] duration metric: took 7.674676457s to libmachine.API.Create "default-k8s-diff-port-993330"
	I0401 20:26:26.705445  333931 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:26:26.705458  333931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:26:26.705523  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:26:26.705571  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.729203  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:26.828975  333931 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:26:26.833808  333931 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:26:26.833879  333931 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:26:26.833894  333931 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:26:26.833902  333931 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:26:26.833920  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:26:26.833982  333931 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:26:26.834088  333931 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:26:26.834227  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:26:26.847553  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:26.882157  333931 start.go:296] duration metric: took 176.700033ms for postStartSetup
	I0401 20:26:26.882438  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:26.907978  333931 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:26:26.908226  333931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:26:26.908265  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:26.931569  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.031621  333931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:26:27.037649  333931 start.go:128] duration metric: took 8.010390339s to createHost
	I0401 20:26:27.037674  333931 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 8.010551296s
	I0401 20:26:27.037773  333931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:26:27.063446  333931 ssh_runner.go:195] Run: cat /version.json
	I0401 20:26:27.063461  333931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:26:27.063512  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.063516  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:27.085169  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.085851  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:27.177526  333931 ssh_runner.go:195] Run: systemctl --version
	I0401 20:26:27.254625  333931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:26:27.408621  333931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:26:27.412929  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.435652  333931 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:26:27.435786  333931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:26:27.476503  333931 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:26:27.476525  333931 start.go:495] detecting cgroup driver to use...
	I0401 20:26:27.476553  333931 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:26:27.476590  333931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:26:27.492778  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:26:27.504743  333931 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:26:27.504810  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:26:27.517961  333931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:26:27.540325  333931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:26:27.626850  333931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:26:27.722127  333931 docker.go:233] disabling docker service ...
	I0401 20:26:27.722208  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:26:27.745690  333931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:26:27.766319  333931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:26:27.872763  333931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:26:27.977279  333931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:26:27.988271  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:26:28.004096  333931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:26:28.004153  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.013450  333931 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:26:28.013563  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.029498  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.046442  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.058158  333931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:26:28.068534  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.080526  333931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.095360  333931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:26:28.104061  333931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:26:28.112928  333931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:26:28.122276  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.213597  333931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:26:28.346275  333931 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:26:28.346362  333931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:26:28.354158  333931 start.go:563] Will wait 60s for crictl version
	I0401 20:26:28.354224  333931 ssh_runner.go:195] Run: which crictl
	I0401 20:26:28.359100  333931 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:26:28.396091  333931 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:26:28.396155  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.442519  333931 ssh_runner.go:195] Run: crio --version
	I0401 20:26:28.489089  333931 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:26:28.490297  333931 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:26:28.509926  333931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:26:28.513490  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.526892  333931 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:26:28.527052  333931 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:26:28.527122  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.614091  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.614117  333931 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:26:28.614176  333931 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:26:28.660869  333931 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:26:28.660895  333931 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:26:28.660905  333931 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:26:28.661007  333931 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:26:28.661091  333931 ssh_runner.go:195] Run: crio config
	I0401 20:26:28.708765  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:28.708807  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:28.708857  333931 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:26:28.708894  333931 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:26:28.709044  333931 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:26:28.709114  333931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:26:28.719490  333931 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:26:28.719560  333931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:26:28.729732  333931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:26:28.754183  333931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:26:28.780989  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:26:28.798890  333931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:26:28.802435  333931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:26:28.815031  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:28.910070  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:28.925155  333931 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:26:28.925176  333931 certs.go:194] generating shared ca certs ...
	I0401 20:26:28.925195  333931 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:28.925359  333931 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:26:28.925412  333931 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:26:28.925420  333931 certs.go:256] generating profile certs ...
	I0401 20:26:28.925495  333931 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:26:28.925513  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt with IP's: []
	I0401 20:26:29.281951  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt ...
	I0401 20:26:29.281989  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.crt: {Name:mk6b013708c87e84a520dd06c1ed59d935facbef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282216  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key ...
	I0401 20:26:29.282235  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key: {Name:mk1377b596a46d9d05fab9e2aadea7e4ab7f7f4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.282354  333931 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:26:29.282382  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0401 20:26:29.465070  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 ...
	I0401 20:26:29.465097  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1: {Name:mkea6ce05ac60d3127494f34ad7738f4f7a9cd35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465262  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 ...
	I0401 20:26:29.465275  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1: {Name:mk5a5ce03c2007d1b6b62ccbf68a08ed19a29dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.465348  333931 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt
	I0401 20:26:29.465414  333931 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key
	I0401 20:26:29.465465  333931 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:26:29.465484  333931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt with IP's: []
	I0401 20:26:29.611491  333931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt ...
	I0401 20:26:29.611522  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt: {Name:mk66e03f24770b70caf6b1a40486800503c8b2bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611688  333931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key ...
	I0401 20:26:29.611707  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key: {Name:mkc22fc28da1642635a034d156c68114235b18db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:29.611877  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:26:29.611912  333931 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:26:29.611922  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:26:29.611942  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:26:29.611962  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:26:29.611983  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:26:29.612034  333931 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:26:29.612583  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:26:29.638146  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:26:29.669130  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:26:29.694857  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:26:29.718710  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:26:29.753534  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:26:29.782658  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:26:29.806962  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:26:29.839501  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:26:29.871232  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:26:29.893112  333931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:26:29.914364  333931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:26:29.934661  333931 ssh_runner.go:195] Run: openssl version
	I0401 20:26:29.941216  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:26:29.952171  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956504  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.956566  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:26:29.963803  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:26:29.977730  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:26:29.987911  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991232  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.991300  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:26:29.997632  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:26:30.006149  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:26:30.014612  333931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018527  333931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.018590  333931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:26:30.025087  333931 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:26:30.034266  333931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:26:30.037338  333931 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:26:30.037388  333931 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:26:30.037477  333931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:26:30.037539  333931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:26:30.072855  333931 cri.go:89] found id: ""
	I0401 20:26:30.072920  333931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:26:30.081457  333931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:26:30.089669  333931 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:26:30.089712  333931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:26:30.097449  333931 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:26:30.097463  333931 kubeadm.go:157] found existing configuration files:
	
	I0401 20:26:30.097501  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0401 20:26:30.105087  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:26:30.105130  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:26:30.112747  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0401 20:26:30.120867  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:26:30.120923  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:26:30.128580  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.137287  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:26:30.137341  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:26:30.145231  333931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0401 20:26:30.153534  333931 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:26:30.153588  333931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:26:30.161477  333931 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:26:30.198560  333931 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:26:30.198667  333931 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:26:30.216234  333931 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:26:30.216434  333931 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:26:30.216506  333931 kubeadm.go:310] OS: Linux
	I0401 20:26:30.216598  333931 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:26:30.216690  333931 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:26:30.216799  333931 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:26:30.216889  333931 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:26:30.216959  333931 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:26:30.217064  333931 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:26:30.217146  333931 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:26:30.217232  333931 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:26:30.217308  333931 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:26:30.273810  333931 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:26:30.273932  333931 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:26:30.274042  333931 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:26:30.281527  333931 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:26:26.812879  330894 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:26.812982  330894 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:26.813062  330894 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:26.990038  330894 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:27.075365  330894 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:27.240420  330894 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:27.671842  330894 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:27.950747  330894 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:27.950932  330894 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.122258  330894 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:28.122505  330894 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-974821 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0401 20:26:28.324660  330894 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:28.698594  330894 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:28.980523  330894 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:28.980792  330894 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:29.069840  330894 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:29.152275  330894 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:29.514308  330894 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:29.980640  330894 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:30.605506  330894 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:30.606016  330894 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:30.608326  330894 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:30.610434  330894 out.go:235]   - Booting up control plane ...
	I0401 20:26:30.610589  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:30.610705  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:30.611574  330894 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:30.621508  330894 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:30.627282  330894 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:30.627348  330894 kubeadm.go:310] [kubelet-start] Starting the kubelet
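
Aside: the [certs] lines above pin each serving certificate to explicit SANs; for embed-certs-974821 the etcd/server cert covers DNS names embed-certs-974821 and localhost plus IPs 192.168.94.2, 127.0.0.1 and ::1. A hedged standard-library sketch of an equivalent x509 template; kubeadm signs with the etcd/ca key, whereas this toy self-signs:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // SANs mirroring the etcd/server cert in the log.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "etcd-server"},
            DNSNames:     []string{"embed-certs-974821", "localhost"},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.94.2"), net.ParseIP("127.0.0.1"), net.ParseIP("::1"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        fmt.Println(len(der), err) // self-signed here; kubeadm signs with etcd/ca
    }
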
	I0401 20:26:31.315349  320217 kubeadm.go:310] [api-check] The API server is healthy after 4.502019518s
	I0401 20:26:31.335358  320217 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:31.346880  320217 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:31.366089  320217 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:31.366379  320217 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-671514 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:31.373667  320217 kubeadm.go:310] [bootstrap-token] Using token: v2u2yj.f0z2c0dsnua55yd0
	I0401 20:26:27.266570  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:27.766918  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.266941  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:28.766395  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.266515  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:29.767351  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.266722  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:30.766361  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.266995  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:31.766839  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
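
Aside: the 318306 lines above poll `sudo .../kubectl get sa default` roughly every 500ms; the cluster is treated as usable once the default service account exists. A sketch of that wait loop (function name and timeout are illustrative, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // default service account exists; RBAC work can proceed
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.20.0/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
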
	I0401 20:26:31.374977  320217 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:31.375115  320217 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:31.379816  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:31.386334  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:31.388802  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:31.391232  320217 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:31.394153  320217 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:31.722786  320217 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:32.174300  320217 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:32.723393  320217 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:32.724543  320217 kubeadm.go:310] 
	I0401 20:26:32.724651  320217 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:32.724664  320217 kubeadm.go:310] 
	I0401 20:26:32.724775  320217 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:32.724787  320217 kubeadm.go:310] 
	I0401 20:26:32.724824  320217 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:32.724911  320217 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:32.724987  320217 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:32.724997  320217 kubeadm.go:310] 
	I0401 20:26:32.725074  320217 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:32.725082  320217 kubeadm.go:310] 
	I0401 20:26:32.725154  320217 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:32.725166  320217 kubeadm.go:310] 
	I0401 20:26:32.725241  320217 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:32.725350  320217 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:32.725455  320217 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:32.725467  320217 kubeadm.go:310] 
	I0401 20:26:32.725587  320217 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:32.725710  320217 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:32.725721  320217 kubeadm.go:310] 
	I0401 20:26:32.725870  320217 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726022  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:32.726056  320217 kubeadm.go:310] 	--control-plane 
	I0401 20:26:32.726067  320217 kubeadm.go:310] 
	I0401 20:26:32.726193  320217 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:32.726204  320217 kubeadm.go:310] 
	I0401 20:26:32.726320  320217 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token v2u2yj.f0z2c0dsnua55yd0 \
	I0401 20:26:32.726469  320217 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:32.729728  320217 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:32.730022  320217 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:32.730191  320217 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
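
Aside: the --discovery-token-ca-cert-hash in the join commands above is kubeadm's standard format, the hex SHA-256 of the cluster CA certificate's SubjectPublicKeyInfo. A sketch that recomputes it from the certificate directory logged earlier (path taken from the log, error handling kept minimal):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }
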
	I0401 20:26:32.730219  320217 cni.go:84] Creating CNI manager for ""
	I0401 20:26:32.730232  320217 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:32.732410  320217 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:32.733706  320217 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:32.738954  320217 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:32.738974  320217 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
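
Aside: cni.go:143 recommends kindnet because the docker driver is paired with the crio runtime. A toy restatement of that decision; the predicate is inferred from the log line, not copied from minikube:

    package main

    import "fmt"

    // chooseCNI is a hypothetical helper mirroring the log's reasoning:
    // the "docker" driver with a non-docker runtime such as "crio" gets
    // kindnet; the fallback shown here is invented for illustration.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime == "crio" {
            return "kindnet"
        }
        return "bridge" // placeholder default, not minikube's actual logic
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet
    }
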
	I0401 20:26:30.284751  333931 out.go:235]   - Generating certificates and keys ...
	I0401 20:26:30.284847  333931 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:26:30.284901  333931 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:26:30.404295  333931 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:26:30.590835  333931 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:26:30.690873  333931 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:26:30.799742  333931 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:26:31.033161  333931 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:26:31.033434  333931 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.368534  333931 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:26:31.368741  333931 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993330 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0401 20:26:31.553327  333931 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:26:31.704997  333931 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:26:31.942936  333931 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:26:31.943238  333931 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:26:32.110376  333931 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:26:32.206799  333931 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:26:32.461113  333931 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:26:32.741829  333931 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:26:32.890821  333931 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:26:32.891603  333931 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:26:32.894643  333931 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:26:32.896444  333931 out.go:235]   - Booting up control plane ...
	I0401 20:26:32.896578  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:26:32.896677  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:26:32.897497  333931 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:26:32.907942  333931 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:26:32.914928  333931 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:26:32.915037  333931 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:26:33.016556  333931 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:33.016705  333931 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:30.718671  330894 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:26:30.718822  330894 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:26:31.220016  330894 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.470178ms
	I0401 20:26:31.220166  330894 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:36.222594  330894 kubeadm.go:310] [api-check] The API server is healthy after 5.002496615s
	I0401 20:26:36.235583  330894 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:36.249901  330894 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:36.277246  330894 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:36.277520  330894 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-974821 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:36.286921  330894 kubeadm.go:310] [bootstrap-token] Using token: jv93nh.i3b9z4yv7qswasld
	I0401 20:26:32.267336  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.767370  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.266984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.766978  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.266517  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.766984  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.266596  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.767257  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.266597  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.767309  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:32.763227  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:33.071865  320217 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:33.071993  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.072093  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-671514 minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=no-preload-671514 minikube.k8s.io/primary=true
	I0401 20:26:33.175980  320217 ops.go:34] apiserver oom_adj: -16
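
Aside: ops.go:34 records the apiserver's OOM score adjustment; -16 biases the kernel's OOM killer away from the process. The value comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run a few lines earlier; a Go equivalent of that probe:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        fields := strings.Fields(string(out))
        if len(fields) == 0 {
            fmt.Println("kube-apiserver not running")
            return
        }
        // Read the OOM adjustment of the first matching PID from procfs.
        adj, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
        fmt.Printf("apiserver oom_adj: %s err=%v\n", strings.TrimSpace(string(adj)), err)
    }
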
	I0401 20:26:33.176076  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:33.677193  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.176502  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:34.676231  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.176527  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:35.676298  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.176529  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:36.677167  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.176802  320217 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.278447  320217 kubeadm.go:1113] duration metric: took 4.206494119s to wait for elevateKubeSystemPrivileges
	I0401 20:26:37.278489  320217 kubeadm.go:394] duration metric: took 15.003095359s to StartCluster
	I0401 20:26:37.278512  320217 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.278583  320217 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:37.279329  320217 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:37.279550  320217 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:37.279680  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:37.279711  320217 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:37.279836  320217 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:26:37.279863  320217 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:26:37.279894  320217 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:37.279899  320217 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:26:37.279902  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.279915  320217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:26:37.280266  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.280505  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.281094  320217 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:37.282386  320217 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:37.302764  320217 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:26:37.302802  320217 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:26:37.303094  320217 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:26:37.304839  320217 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:36.288406  330894 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:36.288562  330894 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:36.295218  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:36.302469  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:36.305295  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:36.309869  330894 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:36.314191  330894 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:36.635951  330894 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:37.059943  330894 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:37.629951  330894 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:37.631276  330894 kubeadm.go:310] 
	I0401 20:26:37.631368  330894 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:37.631398  330894 kubeadm.go:310] 
	I0401 20:26:37.631497  330894 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:37.631505  330894 kubeadm.go:310] 
	I0401 20:26:37.631535  330894 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:37.631609  330894 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:37.631668  330894 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:37.631678  330894 kubeadm.go:310] 
	I0401 20:26:37.631753  330894 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:37.631762  330894 kubeadm.go:310] 
	I0401 20:26:37.631817  330894 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:37.631824  330894 kubeadm.go:310] 
	I0401 20:26:37.631887  330894 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:37.632009  330894 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:37.632130  330894 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:37.632148  330894 kubeadm.go:310] 
	I0401 20:26:37.632267  330894 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:37.632379  330894 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:37.632399  330894 kubeadm.go:310] 
	I0401 20:26:37.632522  330894 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.632661  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:37.632687  330894 kubeadm.go:310] 	--control-plane 
	I0401 20:26:37.632693  330894 kubeadm.go:310] 
	I0401 20:26:37.632803  330894 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:37.632809  330894 kubeadm.go:310] 
	I0401 20:26:37.632932  330894 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jv93nh.i3b9z4yv7qswasld \
	I0401 20:26:37.633069  330894 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:37.636726  330894 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:37.637011  330894 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:37.637144  330894 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:37.637172  330894 cni.go:84] Creating CNI manager for ""
	I0401 20:26:37.637181  330894 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:37.639062  330894 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.306217  320217 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.306234  320217 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:37.306275  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.323290  320217 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.323315  320217 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:37.323369  320217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:26:37.331420  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.345142  320217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:26:37.522615  320217 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:37.540123  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:37.543553  320217 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:37.640023  320217 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:38.172685  320217 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:38.436398  320217 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:26:38.445032  320217 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
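
Aside: the toEnable map logged at addons.go:511 lists every known addon, with only storage-provisioner and default-storageclass set to true for these profiles. A sketch of reducing such a map to the enabled set printed above (map abridged):

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        toEnable := map[string]bool{
            "storage-provisioner":  true,
            "default-storageclass": true,
            "ingress":              false,
            "metrics-server":       false,
        }
        var enabled []string
        for name, on := range toEnable {
            if on {
                enabled = append(enabled, name)
            }
        }
        sort.Strings(enabled) // map iteration order is random in Go
        fmt.Println("* Enabled addons:", enabled)
    }
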
	I0401 20:26:34.018093  333931 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001682271s
	I0401 20:26:34.018217  333931 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:26:38.520345  333931 kubeadm.go:310] [api-check] The API server is healthy after 4.502202922s
	I0401 20:26:38.531202  333931 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:26:38.540027  333931 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:26:38.556557  333931 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:26:38.556824  333931 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-993330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:26:38.563300  333931 kubeadm.go:310] [bootstrap-token] Using token: 2lh0m0.lu1o5bo0yjsw64dl
	I0401 20:26:38.564844  333931 out.go:235]   - Configuring RBAC rules ...
	I0401 20:26:38.564988  333931 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:26:38.567957  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:26:38.573118  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:26:38.576607  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:26:38.578930  333931 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:26:38.581375  333931 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:26:38.925681  333931 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:26:39.351078  333931 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:26:39.926955  333931 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:26:39.927840  333931 kubeadm.go:310] 
	I0401 20:26:39.927902  333931 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:26:39.927928  333931 kubeadm.go:310] 
	I0401 20:26:39.928044  333931 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:26:39.928060  333931 kubeadm.go:310] 
	I0401 20:26:39.928086  333931 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:26:39.928167  333931 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:26:39.928278  333931 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:26:39.928289  333931 kubeadm.go:310] 
	I0401 20:26:39.928359  333931 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:26:39.928370  333931 kubeadm.go:310] 
	I0401 20:26:39.928436  333931 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:26:39.928446  333931 kubeadm.go:310] 
	I0401 20:26:39.928526  333931 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:26:39.928612  333931 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:26:39.928705  333931 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:26:39.928715  333931 kubeadm.go:310] 
	I0401 20:26:39.928829  333931 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:26:39.928936  333931 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:26:39.928947  333931 kubeadm.go:310] 
	I0401 20:26:39.929063  333931 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929213  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:26:39.929237  333931 kubeadm.go:310] 	--control-plane 
	I0401 20:26:39.929241  333931 kubeadm.go:310] 
	I0401 20:26:39.929308  333931 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:26:39.929314  333931 kubeadm.go:310] 
	I0401 20:26:39.929387  333931 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 2lh0m0.lu1o5bo0yjsw64dl \
	I0401 20:26:39.929489  333931 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:26:39.931816  333931 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:26:39.932039  333931 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:26:39.932158  333931 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0401 20:26:39.932194  333931 cni.go:84] Creating CNI manager for ""
	I0401 20:26:39.932202  333931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:26:39.933739  333931 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:26:37.640277  330894 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:37.645480  330894 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:37.645520  330894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:37.663929  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:38.020915  330894 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:38.021121  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.021228  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-974821 minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=embed-certs-974821 minikube.k8s.io/primary=true
	I0401 20:26:38.194466  330894 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:38.194609  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.694720  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.194956  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.695587  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.195419  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.694763  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.266993  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:37.766426  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.266400  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:38.767030  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.266608  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:39.766436  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.267001  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.767416  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.266944  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.766662  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.195260  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.694911  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.194732  330894 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.294801  330894 kubeadm.go:1113] duration metric: took 4.2737406s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.294833  330894 kubeadm.go:394] duration metric: took 15.78469047s to StartCluster
	I0401 20:26:42.294856  330894 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.294916  330894 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.298069  330894 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.302205  330894 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.302395  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.302735  330894 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:42.302795  330894 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.303010  330894 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:26:42.303039  330894 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:26:42.303016  330894 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:26:42.303098  330894 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:26:42.303134  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.303589  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.303817  330894 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.303923  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.305504  330894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.333501  330894 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:26:42.333545  330894 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:26:42.333933  330894 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:26:42.337940  330894 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:42.266968  318306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.466972  318306 kubeadm.go:1113] duration metric: took 15.832229799s to wait for elevateKubeSystemPrivileges
	I0401 20:26:42.467009  318306 kubeadm.go:394] duration metric: took 37.816397182s to StartCluster
	I0401 20:26:42.467028  318306 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.467098  318306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:42.469304  318306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:42.469558  318306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:42.469667  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:42.469700  318306 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:42.469867  318306 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:26:42.469873  318306 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469881  318306 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:26:42.469894  318306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:26:42.469901  318306 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:26:42.469937  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.470179  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.470479  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.471691  318306 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:42.472775  318306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:42.493228  318306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:38.446284  320217 addons.go:514] duration metric: took 1.166586324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:38.676260  320217 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-671514" context rescaled to 1 replicas
	I0401 20:26:40.439677  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:42.439724  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
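
Aside: the node_ready.go lines above poll the node's Ready condition until it flips to True, within the 6m0s budget set at start.go:235. A sketch of one such probe via kubectl's jsonpath output (command shape illustrative; the real runs pass an explicit binary path and kubeconfig):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func nodeReady(name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "node", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for i := 0; i < 10; i++ {
            ok, err := nodeReady("no-preload-671514")
            fmt.Printf("Ready=%v err=%v\n", ok, err)
            if ok {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
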
	I0401 20:26:42.339190  330894 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.339210  330894 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.339263  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.363214  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.363722  330894 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.363738  330894 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.363802  330894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:26:42.402844  330894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:26:42.551219  330894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.573705  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.583133  330894 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.654174  330894 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.042754  330894 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.337980  330894 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:26:43.352907  330894 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:42.493646  318306 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:26:42.493679  318306 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:26:42.494020  318306 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:26:42.494633  318306 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.494650  318306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:42.494699  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.515738  318306 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:42.515763  318306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:42.515813  318306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:26:42.516120  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.550355  318306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:26:42.656623  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:42.680516  318306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:42.724595  318306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:42.836425  318306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.519128  318306 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:43.520669  318306 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:26:43.534575  318306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:26:39.934893  333931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:26:39.938758  333931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:26:39.938778  333931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:26:39.958872  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:26:40.172083  333931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:26:40.172177  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.172216  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993330 minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=default-k8s-diff-port-993330 minikube.k8s.io/primary=true
	I0401 20:26:40.270134  333931 ops.go:34] apiserver oom_adj: -16
	I0401 20:26:40.270220  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:40.770479  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.270979  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:41.770866  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.270999  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:42.770351  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.270939  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.771222  333931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:26:43.837350  333931 kubeadm.go:1113] duration metric: took 3.665237931s to wait for elevateKubeSystemPrivileges
	I0401 20:26:43.837382  333931 kubeadm.go:394] duration metric: took 13.799996617s to StartCluster
	I0401 20:26:43.837397  333931 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.837462  333931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:26:43.839431  333931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:26:43.839725  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:26:43.839747  333931 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:26:43.839814  333931 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:26:43.839917  333931 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.839930  333931 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:26:43.839940  333931 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.839971  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.839969  333931 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:26:43.840003  333931 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:26:43.840381  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.840514  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.841476  333931 out.go:177] * Verifying Kubernetes components...
	I0401 20:26:43.842721  333931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:26:43.865449  333931 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:26:43.865485  333931 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:26:43.865882  333931 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:26:43.866716  333931 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:26:43.868101  333931 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:43.868119  333931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:26:43.868177  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.890569  333931 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:43.890597  333931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:26:43.890657  333931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:26:43.898155  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.912202  333931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:26:43.945216  333931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0401 20:26:43.970994  333931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:26:44.042282  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:26:44.045601  333931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:26:44.448761  333931 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0401 20:26:44.452898  333931 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:26:44.821825  333931 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0401 20:26:43.354186  330894 addons.go:514] duration metric: took 1.051390383s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:43.547860  330894 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-974821" context rescaled to 1 replicas
	I0401 20:26:45.340753  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:43.535896  318306 addons.go:514] duration metric: took 1.066200808s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0401 20:26:44.025251  318306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-964633" context rescaled to 1 replicas
	I0401 20:26:45.524906  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:44.440384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:46.939256  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:44.823053  333931 addons.go:514] duration metric: took 983.234963ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0401 20:26:44.953860  333931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993330" context rescaled to 1 replicas
	I0401 20:26:46.456438  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:48.456551  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:47.342409  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:49.841363  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:48.024193  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:50.524047  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:48.939954  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:51.439185  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:50.956413  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.956547  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:52.341170  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:54.341289  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:52.524370  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:54.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:56.524842  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:53.439869  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.440142  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:55.456231  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:57.456435  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:26:56.341467  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:58.841427  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:26:59.024502  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:01.523890  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:26:57.939586  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.940097  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:02.439242  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:26:59.956123  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:02.455889  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:00.843010  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.341703  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:03.524529  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:06.023956  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:04.439881  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:06.440252  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:04.455966  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:06.957181  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:05.841302  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.341628  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:10.341652  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:08.024174  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:10.024345  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:08.938996  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:10.939970  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:09.456272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:11.956091  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:12.841434  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:14.841660  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:12.524277  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:15.024349  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:13.439697  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:15.939138  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:13.956426  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:16.456496  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:17.341723  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:19.841268  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:17.024507  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:19.525042  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:17.939874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:20.439243  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:22.440378  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:18.955912  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:20.956005  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.956678  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:22.340700  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:24.341052  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:22.023928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.024471  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:26.524299  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:24.939393  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:26.939417  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:25.455481  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:27.455703  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:26.841009  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:29.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:28.524523  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:31.024283  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:28.939450  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:30.939696  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:29.456090  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.955815  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:31.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:34.341539  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:33.524538  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:36.024009  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:32.939747  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:35.439767  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:33.956299  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.456275  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:36.841510  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:39.341347  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:38.024183  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:40.524873  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:37.940003  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:39.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:42.439385  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:38.955607  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:40.956800  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:43.455679  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:41.341555  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.840788  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:43.023891  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:45.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:44.940246  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:46.940455  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:45.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:47.456553  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:45.841064  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.841124  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:50.341001  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:47.024321  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.524407  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:49.439985  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:51.940335  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:49.955951  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:51.956409  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:52.341410  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:54.841093  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:52.023887  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.024576  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:56.024959  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:54.439454  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:56.939508  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:54.456208  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:56.955789  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:27:57.340641  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:59.340854  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:27:58.524756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:01.024138  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:27:58.939647  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:01.439794  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:27:59.456520  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.956243  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:01.341412  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.840829  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:03.524265  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:05.524563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:03.939744  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:06.440045  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:04.456056  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:06.956111  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:05.841482  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.340852  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:10.341317  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:08.024452  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:10.024756  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:08.939549  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:10.939811  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:08.956207  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:11.455839  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:13.456094  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:12.341366  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:14.841183  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:12.025361  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:14.524521  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:16.524987  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:12.939969  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.439776  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:15.456143  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.956747  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:17.341377  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.341483  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:19.023946  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:21.524549  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:17.939662  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:19.939721  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:21.940239  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:20.455830  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:22.456722  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:21.841634  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:24.341452  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:23.524895  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:25.525026  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:24.438964  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:26.439292  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:24.955724  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.956285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:26.840369  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.841243  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:28.024231  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:30.524109  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:28.440189  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:30.939597  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:29.455911  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:31.456314  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:30.841367  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:33.341327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:32.524672  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:34.524774  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:36.524951  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:33.439550  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:35.440245  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:33.955987  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.956227  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:38.456694  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:35.840689  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:37.841065  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.841588  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:39.023986  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:41.524623  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:37.939005  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:39.939536  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:42.439706  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:40.955698  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.956224  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:42.341507  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.841327  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:44.024595  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:46.523928  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:44.940152  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:47.439732  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:45.455937  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.955630  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:47.340938  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:49.841495  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:48.524190  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:50.524340  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:49.938992  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:51.940205  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:49.956277  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.456432  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:52.341370  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:54.341564  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:53.024675  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:55.523833  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:54.439752  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:56.440174  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:54.456580  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.956122  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:28:56.341664  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.841264  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:28:58.024006  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:00.024503  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:28:58.939186  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:00.939375  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:28:58.956316  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.456102  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:01.341241  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:03.341319  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:05.341600  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:02.524673  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:05.024010  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:02.939860  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:05.439453  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:03.956025  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:05.956133  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:08.456171  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:07.841143  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:10.341122  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:07.523719  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:09.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:07.939821  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.438914  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:12.439235  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:10.956001  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.956142  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:12.341661  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:14.841049  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:12.023977  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.024449  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:16.523729  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:14.439825  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:16.939668  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:15.455614  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:17.456241  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:16.841077  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.841131  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:18.524124  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:20.524738  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:19.440109  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:21.940032  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:19.956104  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:22.455902  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:21.341247  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.341368  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:23.023758  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:25.024198  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:23.940105  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:26.439762  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:24.456217  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:26.956261  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:25.841203  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:28.341579  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:27.525032  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:30.023864  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:28.940457  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:31.439874  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:29.456184  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:31.456285  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:30.841364  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:33.340883  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:35.341199  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:32.524925  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:35.024046  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:33.939810  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:36.439359  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:33.956165  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:36.455757  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:38.455847  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:37.341322  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:39.341383  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:37.024167  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:39.524569  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:38.439759  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.939916  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:40.456088  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:42.456200  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:41.840811  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:43.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:42.023653  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:44.024644  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:46.524378  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:43.439783  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:45.940130  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:44.955680  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.956328  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:46.341244  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:48.341270  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:49.023827  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:51.024273  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:48.439324  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:50.439633  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:52.440208  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:49.455631  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:51.455836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:50.841179  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.340781  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:55.341224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:53.524530  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:56.023648  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:54.940220  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:57.439520  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:29:53.955662  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:56.456471  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:58.456544  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:29:57.341258  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:59.840812  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:29:58.024095  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:00.524597  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:29:59.440222  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:01.940070  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:00.955859  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:02.956272  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:01.841344  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:04.341580  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:02.524746  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:05.023985  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:04.439796  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:06.439839  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:05.456215  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:07.456449  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:06.841422  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:09.341295  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:07.026315  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:09.524057  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:08.440063  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:10.939342  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:09.955836  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.956424  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:11.341361  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:13.341635  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:12.024045  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:14.524429  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:16.524494  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:12.939384  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.940258  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:17.439661  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:14.455827  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:16.456323  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:15.841119  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:17.841150  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.841518  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:19.024468  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:21.024745  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:19.439858  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:21.939976  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:18.955508  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:20.956126  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.956183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:22.341249  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:24.341376  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:23.524216  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:26.024624  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:24.439649  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:26.440156  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:25.456302  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:27.456379  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:26.841261  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:29.341505  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:28.524527  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:31.023563  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:28.939308  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:30.939745  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:29.955593  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.955956  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:31.841328  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.841451  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:33.023805  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:35.024667  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:33.439114  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:35.439616  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:37.939989  320217 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:30:38.436499  320217 node_ready.go:38] duration metric: took 4m0.000055311s for node "no-preload-671514" to be "Ready" ...
	I0401 20:30:38.438173  320217 out.go:201] 
	W0401 20:30:38.439456  320217 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:38.439475  320217 out.go:270] * 
	W0401 20:30:38.440324  320217 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:38.441563  320217 out.go:201] 
	I0401 20:30:34.456114  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.456183  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:36.341225  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:38.341405  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:37.523708  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.023581  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:40.841224  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341058  330894 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:30:43.341082  330894 node_ready.go:38] duration metric: took 4m0.003071122s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:30:43.342750  330894 out.go:201] 
	W0401 20:30:43.343924  330894 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.343942  330894 out.go:270] * 
	W0401 20:30:43.344884  330894 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.346230  330894 out.go:201] 
	I0401 20:30:42.023613  318306 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:30:43.523708  318306 node_ready.go:38] duration metric: took 4m0.003003222s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:30:43.525700  318306 out.go:201] 
	W0401 20:30:43.527169  318306 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:43.527189  318306 out.go:270] * 
	W0401 20:30:43.528115  318306 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:43.529169  318306 out.go:201] 
	I0401 20:30:38.956138  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:40.956284  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:43.455702  333931 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:30:44.456485  333931 node_ready.go:38] duration metric: took 4m0.003543817s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:30:44.458297  333931 out.go:201] 
	W0401 20:30:44.459571  333931 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:30:44.459594  333931 out.go:270] * 
	W0401 20:30:44.460727  333931 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:30:44.461950  333931 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 20:36:04 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:04.949183512Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ee5a7b83-a775-469d-9ba0-fb8e540c3618 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:19.948660431Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d5697e1b-86a1-4e20-b986-d48637f3304c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:19.948898406Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d5697e1b-86a1-4e20-b986-d48637f3304c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:34.948907018Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3a5ddb90-f5cf-4987-9f6f-0eca1f4f2993 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:34.949199750Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3a5ddb90-f5cf-4987-9f6f-0eca1f4f2993 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:48 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:48.948626482Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7d4899c0-2d5d-4a69-b9f0-b1648ddff6b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:48 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:48.949072067Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7d4899c0-2d5d-4a69-b9f0-b1648ddff6b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:00 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:00.949037964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=01343dbb-05a8-4242-9470-ef82048e8077 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:00 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:00.949371971Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=01343dbb-05a8-4242-9470-ef82048e8077 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:15 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:15.948344614Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5899b1a9-97c7-4734-8e35-eeb8de774ebe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:15 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:15.948610574Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5899b1a9-97c7-4734-8e35-eeb8de774ebe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:29 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:29.948459494Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=dfa7a2ee-736c-484b-8193-5c4d8ddda7a5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:29 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:29.948694496Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=dfa7a2ee-736c-484b-8193-5c4d8ddda7a5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:41.948531875Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8a698bba-2b3c-415c-bf08-febc2e361708 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:41.948814994Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8a698bba-2b3c-415c-bf08-febc2e361708 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:54 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:54.949116355Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1881ba71-48a5-4617-86c8-65379bada084 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:54 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:54.949381293Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1881ba71-48a5-4617-86c8-65379bada084 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:05 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:05.948235497Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=84fd2ecd-4c41-4b0b-9a89-4aae186800df name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:05 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:05.948528092Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=84fd2ecd-4c41-4b0b-9a89-4aae186800df name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:18 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:18.948791442Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=157127c3-4c98-4168-acea-66edfdf6c57c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:18 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:18.949069354Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=157127c3-4c98-4168-acea-66edfdf6c57c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:30.948653821Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e07ce2fd-ea53-456d-b422-51e72a725007 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:30.948941384Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e07ce2fd-ea53-456d-b422-51e72a725007 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:41.948601964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a93c6a1e-ea0d-47aa-b32b-b4997e499c7f name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:41.948843792Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a93c6a1e-ea0d-47aa-b32b-b4997e499c7f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dab987ff7f406       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   149ac7d6539bc       kube-proxy-gn6mh
	132535ef7e958       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   4731f2f1d181b       etcd-embed-certs-974821
	74706ee864871       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   d15bcc723fd1f       kube-controller-manager-embed-certs-974821
	820d4cbf19595       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   d173b3672c77c       kube-apiserver-embed-certs-974821
	7eaba18859263       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   789d6e327dc78       kube-scheduler-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 089bcdcc4f154a62af892e7332fe1d3b
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [132535ef7e958754bdbf8341d8f37e53b56cb185ee74f78902764c4aaf5544ae] <==
	{"level":"info","ts":"2025-04-01T20:26:31.923556Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:31.923632Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:32.664082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.665247Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-974821 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:32.665313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665616Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666258Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666367Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:32.667046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667079Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:36:33.363086Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":500}
	{"level":"info","ts":"2025-04-01T20:36:33.367802Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":500,"took":"4.453549ms","hash":3344956742,"current-db-size-bytes":1236992,"current-db-size":"1.2 MB","current-db-size-in-use-bytes":1236992,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-04-01T20:36:33.367855Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3344956742,"revision":500,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:47 up  1:21,  0 users,  load average: 1.05, 0.96, 1.64
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [820d4cbf19595741dcb7bf30a4333deced286f0e097e71b59aafcd4be0161d9d] <==
	I0401 20:26:34.517957       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0401 20:26:34.524149       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0401 20:26:34.531430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:34.533518       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:34.533607       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 20:26:34.542892       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:34.542994       1 policy_source.go:240] refreshing policies
	E0401 20:26:34.587040       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:34.624866       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:34.727984       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:35.340661       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:35.345732       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:35.345773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:35.814161       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:35.858128       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:35.960870       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:35.967529       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0401 20:26:35.968831       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:35.973430       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:36.450795       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:37.040714       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:37.058369       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:37.073730       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:41.052685       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:41.852837       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [74706ee86487117baef163b1da8dc8bd6bd6f7b6d9e5e299c0a2f4e7b089ab0c] <==
	I0401 20:26:41.000264       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:26:41.000582       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:41.000636       1 shared_informer.go:320] Caches are synced for cronjob
	I0401 20:26:41.000694       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:41.001404       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:41.001439       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:41.003713       1 shared_informer.go:320] Caches are synced for service account
	I0401 20:26:41.006902       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-974821" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:41.006931       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007090       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.007092       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:26:41.017896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:41.020103       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:26:41.026422       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.173290       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.358163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:42.119656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.064490878s"
	I0401 20:26:42.130355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.575326ms"
	I0401 20:26:42.130626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="125.57µs"
	I0401 20:26:43.227258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="158.555429ms"
	I0401 20:26:43.243190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.765842ms"
	I0401 20:26:43.246386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.123µs"
	I0401 20:33:35.675965       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:38:41.786831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	
	
	==> kube-proxy [dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a] <==
	I0401 20:26:42.428649       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:42.664637       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:26:42.664720       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:42.864985       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:42.865059       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:42.867616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:42.868124       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:42.868224       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:42.869989       1 config.go:199] "Starting service config controller"
	I0401 20:26:42.870084       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:42.870303       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:42.870892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:42.870787       1 config.go:329] "Starting node config controller"
	I0401 20:26:42.871044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:42.970938       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:42.971176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:42.974999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7eaba18859263cff2209aeee6e1ec276f41b4d381c0ad36d0b34b5698e41351d] <==
	W0401 20:26:34.532713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:34.533076       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532738       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:34.533134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532794       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:34.533157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532850       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:34.533254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532864       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:34.533278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.533021       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:34.533298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.402058       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:35.402103       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:35.536405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:35.536453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.552040       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:35.552168       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.597795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:35.597857       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.624483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.624531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.630009       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.630051       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:38.324795       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:37:57 embed-certs-974821 kubelet[1655]: E0401 20:37:57.203339    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:02 embed-certs-974821 kubelet[1655]: E0401 20:38:02.204559    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:05 embed-certs-974821 kubelet[1655]: E0401 20:38:05.948844    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.119474    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539887119211331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.119519    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539887119211331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.205363    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:12 embed-certs-974821 kubelet[1655]: E0401 20:38:12.206019    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.120354    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539897120193636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.120391    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539897120193636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.207101    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:18 embed-certs-974821 kubelet[1655]: E0401 20:38:18.949333    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:22 embed-certs-974821 kubelet[1655]: E0401 20:38:22.208619    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.121534    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539907121313580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.121578    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539907121313580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.209465    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:30 embed-certs-974821 kubelet[1655]: E0401 20:38:30.949268    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:32 embed-certs-974821 kubelet[1655]: E0401 20:38:32.210191    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.122844    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539917122657379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.122885    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539917122657379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.211890    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:41 embed-certs-974821 kubelet[1655]: E0401 20:38:41.949186    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:42 embed-certs-974821 kubelet[1655]: E0401 20:38:42.212649    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.123869    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539927123658126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.123915    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539927123658126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.213768    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
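The failure chain in the logs above is consistent across components: CRI-O cannot pull docker.io/kindest/kindnetd:v20250214-acbabc1a because Docker Hub's unauthenticated pull rate limit is exhausted (toomanyrequests), so the kindnet-cni container never starts, no CNI configuration is written to /etc/cni/net.d/, and the kubelet keeps the node NotReady. A minimal workaround sketch for reproducing this locally, assuming the host itself can still pull the image (for example with authenticated Docker Hub credentials):

	# Pull once on the host, then side-load the image into the minikube node
	# so CRI-O never has to contact Docker Hub from inside the cluster.
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a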
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1 (79.026346ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qwn44:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m40s (x2 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1
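The exit status 1 here follows from how the post-mortem assembles its arguments: busybox lives in the default namespace, while coredns-668d6bf9bc-8kp7j, kindnet-bq54h, and storage-provisioner live in kube-system, and kubectl describe pod only looks in the current namespace, hence the NotFound errors on stderr. The busybox FailedScheduling event above is the downstream symptom of the same CNI failure: the node.kubernetes.io/not-ready taint stays on the node until kindnet starts. A sketch of the equivalent manual inspection, assuming the same context name:

	kubectl --context embed-certs-974821 -n kube-system describe pod kindnet-bq54h
	kubectl --context embed-certs-974821 describe node embed-certs-974821 | grep -A1 'Taints:'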
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 332784,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:16.922485679Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89edf444d031870b678606c3dab14cec64f5db6770fe8f67ec9b313ab700bd50",
	            "SandboxKey": "/var/run/docker/netns/89edf444d031",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:e2:72:9d:20:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "8c07b01949d42e8f17c50ba6d828c0850ad6e031d8825f2ba64c77c1d4a405fd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
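The inspect dump above is the raw JSON that later cli_runner calls in this log consume. For triage it is usually enough to pull one field with a --format template instead of reading the whole document. The sketch below is plain Go, not minikube's actual helper; it extracts the host port that 22/tcp is published on (the port SSH-based assertions depend on), using the container name taken from the dump above.

	// portForSSH is a minimal sketch, not minikube's helper: it shells out to
	// `docker container inspect --format` to read one field of the JSON above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func portForSSH(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "--format",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := portForSSH("embed-certs-974821")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh reachable at 127.0.0.1:" + port) // 33098 in the dump above
	}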
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25: (1.055283212s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:46.936490  347136 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:46.937267  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937279  347136 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:46.937283  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937483  347136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:46.938093  347136 out.go:352] Setting JSON to false
	I0401 20:38:46.939336  347136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4873,"bootTime":1743535054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:46.939416  347136 start.go:139] virtualization: kvm guest
	I0401 20:38:46.941391  347136 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:46.942731  347136 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:46.942777  347136 notify.go:220] Checking for updates...
	I0401 20:38:46.945003  347136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:46.946154  347136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:46.947439  347136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:46.948753  347136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:46.949903  347136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:46.951546  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:46.952045  347136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:46.979943  347136 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:46.980058  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.045628  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.033607616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.045796  347136 docker.go:318] overlay module found
	I0401 20:38:47.048624  347136 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:47.049864  347136 start.go:297] selected driver: docker
	I0401 20:38:47.049880  347136 start.go:901] validating driver "docker" against &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.049961  347136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:47.050761  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.117041  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.106419089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.117471  347136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:47.117515  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:47.117580  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:47.117639  347136 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.120421  347136 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:38:47.121737  347136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:47.123130  347136 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:47.124427  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:47.124518  347136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:47.124567  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124806  347136 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124812  347136 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124821  347136 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124871  347136 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124886  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:38:47.124904  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:38:47.124917  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:38:47.124925  347136 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 58.4µs
	I0401 20:38:47.124937  347136 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:38:47.124920  347136 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 132.796µs
	I0401 20:38:47.124950  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:38:47.124967  347136 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 266.852µs
	I0401 20:38:47.124984  347136 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:38:47.124950  347136 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:38:47.124898  347136 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 93.38µs
	I0401 20:38:47.124997  347136 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:38:47.124908  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:38:47.124924  347136 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125007  347136 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.163µs
	I0401 20:38:47.125016  347136 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:38:47.125051  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:38:47.125060  347136 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 139.313µs
	I0401 20:38:47.125072  347136 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:38:47.125103  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:38:47.125122  347136 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 380.281µs
	I0401 20:38:47.125135  347136 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:38:47.125181  347136 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125270  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:38:47.125286  347136 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 161.592µs
	I0401 20:38:47.125299  347136 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:38:47.125308  347136 cache.go:87] Successfully saved all images to host disk.
	I0401 20:38:47.151197  347136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:47.151225  347136 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:47.151245  347136 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:47.151281  347136 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.151359  347136 start.go:364] duration metric: took 54.86µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:38:47.151382  347136 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:47.151393  347136 fix.go:54] fixHost starting: 
	I0401 20:38:47.151728  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.176435  347136 fix.go:112] recreateIfNeeded on no-preload-671514: state=Stopped err=<nil>
	W0401 20:38:47.176470  347136 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:47.178562  347136 out.go:177] * Restarting existing docker container for "no-preload-671514" ...
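
Because this profile was started with --preload=false (see the Audit table above), every image is resolved through minikube's per-image cache, and the cache.go lines all take the fast path: each tarball already exists on disk, so the save is skipped. A minimal sketch of that exists-then-skip shape follows; ensureCached, the save callback, and the path are illustrative, not minikube's API.

	// ensureCached mirrors the pattern in the cache.go lines above: if the
	// tarball is already on disk, skip the save ("exists ... skipping").
	package main

	import (
		"fmt"
		"os"
	)

	func ensureCached(tarPath string, save func(string) error) error {
		if _, err := os.Stat(tarPath); err == nil {
			return nil // cache hit, nothing to do
		}
		return save(tarPath) // cache miss: export the image to the tar file
	}

	func main() {
		err := ensureCached(
			"/tmp/cache/images/amd64/registry.k8s.io/pause_3.10", // illustrative path
			func(p string) error { fmt.Println("would save", p); return nil },
		)
		fmt.Println("err:", err)
	}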
	
	
	==> CRI-O <==
	Apr 01 20:36:04 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:04.949183512Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ee5a7b83-a775-469d-9ba0-fb8e540c3618 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:19.948660431Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d5697e1b-86a1-4e20-b986-d48637f3304c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:19.948898406Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d5697e1b-86a1-4e20-b986-d48637f3304c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:34.948907018Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3a5ddb90-f5cf-4987-9f6f-0eca1f4f2993 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:34 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:34.949199750Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3a5ddb90-f5cf-4987-9f6f-0eca1f4f2993 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:48 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:48.948626482Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7d4899c0-2d5d-4a69-b9f0-b1648ddff6b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:48 embed-certs-974821 crio[1032]: time="2025-04-01 20:36:48.949072067Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7d4899c0-2d5d-4a69-b9f0-b1648ddff6b4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:00 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:00.949037964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=01343dbb-05a8-4242-9470-ef82048e8077 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:00 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:00.949371971Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=01343dbb-05a8-4242-9470-ef82048e8077 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:15 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:15.948344614Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5899b1a9-97c7-4734-8e35-eeb8de774ebe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:15 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:15.948610574Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5899b1a9-97c7-4734-8e35-eeb8de774ebe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:29 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:29.948459494Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=dfa7a2ee-736c-484b-8193-5c4d8ddda7a5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:29 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:29.948694496Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=dfa7a2ee-736c-484b-8193-5c4d8ddda7a5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:41.948531875Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8a698bba-2b3c-415c-bf08-febc2e361708 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:41.948814994Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8a698bba-2b3c-415c-bf08-febc2e361708 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:54 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:54.949116355Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1881ba71-48a5-4617-86c8-65379bada084 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:54 embed-certs-974821 crio[1032]: time="2025-04-01 20:37:54.949381293Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1881ba71-48a5-4617-86c8-65379bada084 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:05 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:05.948235497Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=84fd2ecd-4c41-4b0b-9a89-4aae186800df name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:05 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:05.948528092Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=84fd2ecd-4c41-4b0b-9a89-4aae186800df name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:18 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:18.948791442Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=157127c3-4c98-4168-acea-66edfdf6c57c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:18 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:18.949069354Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=157127c3-4c98-4168-acea-66edfdf6c57c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:30.948653821Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e07ce2fd-ea53-456d-b422-51e72a725007 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:30 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:30.948941384Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e07ce2fd-ea53-456d-b422-51e72a725007 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:41.948601964Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a93c6a1e-ea0d-47aa-b32b-b4997e499c7f name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:41 embed-certs-974821 crio[1032]: time="2025-04-01 20:38:41.948843792Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a93c6a1e-ea0d-47aa-b32b-b4997e499c7f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dab987ff7f406       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   149ac7d6539bc       kube-proxy-gn6mh
	132535ef7e958       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   4731f2f1d181b       etcd-embed-certs-974821
	74706ee864871       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   d15bcc723fd1f       kube-controller-manager-embed-certs-974821
	820d4cbf19595       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   d173b3672c77c       kube-apiserver-embed-certs-974821
	7eaba18859263       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   789d6e327dc78       kube-scheduler-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:38:41 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 089bcdcc4f154a62af892e7332fe1d3b
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
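
Two findings above explain each other: the node reports Ready=False because no CNI configuration exists in /etc/cni/net.d, and the CRI-O section shows the kindnet image (the CNI provider this docker+crio configuration recommends) was never pulled, so the kindnet-bq54h pod never wrote that configuration. A hedged triage sketch that checks both ends follows; kubectl and crictl invocations are standard, the wiring around them is illustrative.

	// Sketch: confirm the NotReady <-> missing-CNI-image correlation seen above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) string {
		out, _ := exec.Command(name, args...).CombinedOutput() // failures surface in the output for triage
		return string(out)
	}

	func main() {
		// Ready condition straight from the API; expect "False" given the report above.
		fmt.Println(run("kubectl", "--context", "embed-certs-974821", "get", "nodes",
			"-o", `jsonpath={.items[0].status.conditions[?(@.type=="Ready")].status}`))
		// Is the kindnet image in CRI-O's store? Expect the "missing" branch here.
		fmt.Println(run("out/minikube-linux-amd64", "-p", "embed-certs-974821", "ssh",
			"sudo crictl images | grep kindnetd || echo kindnetd image missing"))
	}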
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [132535ef7e958754bdbf8341d8f37e53b56cb185ee74f78902764c4aaf5544ae] <==
	{"level":"info","ts":"2025-04-01T20:26:31.923556Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:31.923632Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:32.664082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:32.664192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.664214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:32.665247Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-974821 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:32.665313Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:32.665616Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666258Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666367Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:32.666534Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:32.666955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:32.667046Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667079Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:32.667518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:36:33.363086Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":500}
	{"level":"info","ts":"2025-04-01T20:36:33.367802Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":500,"took":"4.453549ms","hash":3344956742,"current-db-size-bytes":1236992,"current-db-size":"1.2 MB","current-db-size-in-use-bytes":1236992,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-04-01T20:36:33.367855Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3344956742,"revision":500,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:49 up  1:21,  0 users,  load average: 1.29, 1.01, 1.66
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [820d4cbf19595741dcb7bf30a4333deced286f0e097e71b59aafcd4be0161d9d] <==
	I0401 20:26:34.517957       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0401 20:26:34.524149       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0401 20:26:34.531430       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0401 20:26:34.533518       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:34.533607       1 shared_informer.go:320] Caches are synced for configmaps
	I0401 20:26:34.542892       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0401 20:26:34.542994       1 policy_source.go:240] refreshing policies
	E0401 20:26:34.587040       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:34.624866       1 controller.go:615] quota admission added evaluator for: namespaces
	I0401 20:26:34.727984       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:35.340661       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:35.345732       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:35.345773       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:35.814161       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:35.858128       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:35.960870       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:35.967529       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0401 20:26:35.968831       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:35.973430       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:36.450795       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:37.040714       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:37.058369       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:37.073730       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:41.052685       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:41.852837       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [74706ee86487117baef163b1da8dc8bd6bd6f7b6d9e5e299c0a2f4e7b089ab0c] <==
	I0401 20:26:41.000264       1 shared_informer.go:320] Caches are synced for PV protection
	I0401 20:26:41.000582       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:41.000636       1 shared_informer.go:320] Caches are synced for cronjob
	I0401 20:26:41.000694       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:41.001404       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:41.001439       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:41.003713       1 shared_informer.go:320] Caches are synced for service account
	I0401 20:26:41.006902       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-974821" podCIDRs=["10.244.0.0/24"]
	I0401 20:26:41.006931       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.007090       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.007092       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:26:41.017896       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:41.020103       1 shared_informer.go:320] Caches are synced for disruption
	I0401 20:26:41.026422       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:41.173290       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:41.358163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:26:42.119656       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.064490878s"
	I0401 20:26:42.130355       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.575326ms"
	I0401 20:26:42.130626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="125.57µs"
	I0401 20:26:43.227258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="158.555429ms"
	I0401 20:26:43.243190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="10.765842ms"
	I0401 20:26:43.246386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="96.123µs"
	I0401 20:33:35.675965       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	I0401 20:38:41.786831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	
	
	==> kube-proxy [dab987ff7f4062c94f23af4dec62a3f54bd4527aded9e133555c0303796e167a] <==
	I0401 20:26:42.428649       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:42.664637       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:26:42.664720       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:42.864985       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:42.865059       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:42.867616       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:42.868124       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:42.868224       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:42.869989       1 config.go:199] "Starting service config controller"
	I0401 20:26:42.870084       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:42.870303       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:42.870892       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:42.870787       1 config.go:329] "Starting node config controller"
	I0401 20:26:42.871044       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:42.970938       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:26:42.971176       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:42.974999       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7eaba18859263cff2209aeee6e1ec276f41b4d381c0ad36d0b34b5698e41351d] <==
	W0401 20:26:34.532713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:34.533076       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532738       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:34.533134       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532794       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:34.533157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532850       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:34.533254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.532864       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:34.533278       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:34.533021       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:34.533298       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.402058       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:35.402103       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:35.536405       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:35.536453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.552040       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:35.552168       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.597795       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:35.597857       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.624483       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.624531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:35.630009       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:35.630051       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:38.324795       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:37:57 embed-certs-974821 kubelet[1655]: E0401 20:37:57.203339    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:02 embed-certs-974821 kubelet[1655]: E0401 20:38:02.204559    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:05 embed-certs-974821 kubelet[1655]: E0401 20:38:05.948844    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.119474    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539887119211331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.119519    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539887119211331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:07 embed-certs-974821 kubelet[1655]: E0401 20:38:07.205363    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:12 embed-certs-974821 kubelet[1655]: E0401 20:38:12.206019    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.120354    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539897120193636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.120391    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539897120193636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:17 embed-certs-974821 kubelet[1655]: E0401 20:38:17.207101    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:18 embed-certs-974821 kubelet[1655]: E0401 20:38:18.949333    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:22 embed-certs-974821 kubelet[1655]: E0401 20:38:22.208619    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.121534    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539907121313580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.121578    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539907121313580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:27 embed-certs-974821 kubelet[1655]: E0401 20:38:27.209465    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:30 embed-certs-974821 kubelet[1655]: E0401 20:38:30.949268    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:32 embed-certs-974821 kubelet[1655]: E0401 20:38:32.210191    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.122844    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539917122657379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.122885    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539917122657379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:37 embed-certs-974821 kubelet[1655]: E0401 20:38:37.211890    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:41 embed-certs-974821 kubelet[1655]: E0401 20:38:41.949186    1655 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:38:42 embed-certs-974821 kubelet[1655]: E0401 20:38:42.212649    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.123869    1655 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539927123658126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.123915    1655 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539927123658126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:47 embed-certs-974821 kubelet[1655]: E0401 20:38:47.213768    1655 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1 (79.982173ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qwn44:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m42s (x2 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (485.03s)
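The kubelet entries above show the failure chain for this test: docker.io rate-limits the unauthenticated pull of kindest/kindnetd, the kindnet CNI pod stays in ImagePullBackOff, the node never reports NetworkReady, and busybox times out against the resulting not-ready taint. A minimal diagnostic sketch, assuming the context and pod names taken from these logs (this is not part of the test suite):

	# Confirm the chain: NotReady node -> not-ready taint -> kindnet stuck pulling.
	kubectl --context embed-certs-974821 get nodes -o wide
	kubectl --context embed-certs-974821 describe node embed-certs-974821 | grep -i taint
	kubectl --context embed-certs-974821 -n kube-system describe pod kindnet-bq54h | tail -n 20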

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (485.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-964633 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [91682cc4-7a2e-4fa7-ab57-5b2f65a76efb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/DeployApp: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:194: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
start_stop_delete_test.go:194: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
start_stop_delete_test.go:194: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2025-04-01 20:38:46.341842542 +0000 UTC m=+3211.942774502
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-964633 describe po busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context old-k8s-version-964633 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-5nmbk:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5nmbk
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  20s (x9 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-964633 logs busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context old-k8s-version-964633 logs busybox -n default:
start_stop_delete_test.go:194: wait: integration-test=busybox within 8m0s: context deadline exceeded
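The FailedScheduling event above matches the tolerations listed in the describe output: the DefaultTolerationSeconds admission plugin only gives the pod NoExecute tolerations for not-ready/unreachable, while the node controller's not-ready taint also blocks scheduling, so busybox can never be placed until the CNI comes up. Purely as a hedged illustration (not something this test does), Kubernetes permits adding tolerations to an existing pod, so scheduling could be unblocked, without fixing the underlying network, with something like:

	# Hypothetical: append a blanket toleration for the not-ready taint
	# (omitting "effect" tolerates all effects of the key).
	kubectl --context old-k8s-version-964633 patch pod busybox --type=json \
	  -p='[{"op":"add","path":"/spec/tolerations/-","value":{"key":"node.kubernetes.io/not-ready","operator":"Exists"}}]'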
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:51.595131743Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f156c5de777c528d6f9375314eb0d4cbc858057b93c8250916b99a0c025d2197",
	            "SandboxKey": "/var/run/docker/netns/f156c5de777c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:e3:3a:a8:12:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "243297cc045b5d60c15285cd09a136adfdf271f0421c51d1725f61e9cf50e39f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.415167384s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-460236                                       | bridge-460236                | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	
	==> CRI-O <==
	Apr 01 20:36:17 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:17.555011273Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3c083e72-d778-4c8b-aa84-9e0597a472d5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:30.554749413Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cbc39080-361f-4e26-8791-75488975b5fb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:30.555012878Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cbc39080-361f-4e26-8791-75488975b5fb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:31 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:31.501216865Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=3000fd33-00a1-4511-9697-7bee9833626b name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:31 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:31.501493427Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3000fd33-00a1-4511-9697-7bee9833626b name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:44 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:44.554644604Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=04d5b0fa-383b-4abb-b82b-fdb90b2f51d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:44 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:44.554892905Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=04d5b0fa-383b-4abb-b82b-fdb90b2f51d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:57 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:57.554721559Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=65891265-fe08-4a2f-8447-0556c4c4d554 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:57 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:57.555007934Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=65891265-fe08-4a2f-8447-0556c4c4d554 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:12 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:12.554607450Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7c10185d-eff2-4f3e-bab6-f3be1749f36c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:12 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:12.554904152Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7c10185d-eff2-4f3e-bab6-f3be1749f36c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:23 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:23.554640894Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=13aadc71-7515-4e0e-8a7b-872c352687ab name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:23 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:23.554924156Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=13aadc71-7515-4e0e-8a7b-872c352687ab name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:35 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:35.554602222Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f9284b4f-eeec-4c8e-b428-04f8d4f2f140 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:35 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:35.554899402Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=f9284b4f-eeec-4c8e-b428-04f8d4f2f140 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:49 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:49.554708862Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=deb55827-ca91-49a4-bc3f-d29c6c183ec1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:49 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:49.555018170Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=deb55827-ca91-49a4-bc3f-d29c6c183ec1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:04 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:04.554626944Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=538d21a0-b08b-4101-9657-cc85f4df1ee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:04 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:04.554915732Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=538d21a0-b08b-4101-9657-cc85f4df1ee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:18 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:18.554724991Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=18280b04-19a1-4fdb-bff1-2be4e7786ff9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:18 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:18.554969043Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=18280b04-19a1-4fdb-bff1-2be4e7786ff9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:30.554748361Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=069f7011-e428-4347-bc8d-64e6a8b4f5be name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:30.555050600Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=069f7011-e428-4347-bc8d-64e6a8b4f5be name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:43.554740527Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=053feadf-749b-40bc-8769-67d36202b7d7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:43.554995873Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=053feadf-749b-40bc-8769-67d36202b7d7 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b18de8419e15       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   12 minutes ago      Running             kube-proxy                0                   45b225c010954       kube-proxy-vb8ks
	4384af78a1883       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   12 minutes ago      Running             kube-controller-manager   0                   7e4cef1969b72       kube-controller-manager-old-k8s-version-964633
	9513e7ad765e4       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   12 minutes ago      Running             etcd                      0                   aabb404aa7c03       etcd-old-k8s-version-964633
	f2526055eea0e       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   12 minutes ago      Running             kube-scheduler            0                   0a05fd341a521       kube-scheduler-old-k8s-version-964633
	2064fb7c665fb       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   12 minutes ago      Running             kube-apiserver            0                   b311a7ae56993       kube-apiserver-old-k8s-version-964633
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 de7c8d50f85047d185c1ae1aa27193dd
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [9513e7ad765e4b69c4cbbfbd6cb33f21a3a48b715bdea7a1ff49cc1566bcc760] <==
	2025-04-01 20:35:05.601772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:15.601695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:25.601777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:35.601845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:45.601799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:55.601806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:05.601734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:15.601785 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:17.961904 I | mvcc: store.index: compact 557
	2025-04-01 20:36:17.962724 I | mvcc: finished scheduled compaction at 557 (took 569.383µs)
	2025-04-01 20:36:25.601851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:35.601736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:45.601681 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:55.601695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:05.601735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:15.601734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:25.601765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:35.601743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:45.601799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:55.601791 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:05.601840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:15.601768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:25.601839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:35.601728 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:45.601714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:38:48 up  1:21,  0 users,  load average: 1.05, 0.96, 1.64
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2064fb7c665fb767c07a50e206db452bfd0e93dc10750dd7ecf94bfe4beb0cc4] <==
	I0401 20:33:29.082821       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:34:07.873250       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:34:07.873305       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:34:07.873315       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:34:44.002362       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:34:44.002409       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:34:44.002419       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:35:21.043921       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:35:21.043996       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:35:21.044006       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:36:05.487178       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:36:05.487225       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:36:05.487234       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:36:41.995934       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:36:41.995978       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:36:41.995986       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:37:13.165219       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:37:13.165261       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:37:13.165268       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:37:47.320900       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:37:47.320957       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:37:47.320968       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:38:28.542921       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:38:28.542965       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:38:28.542974       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [4384af78a188378e4c730aadae8ad08f38d60dd777008b0a8138a2838ea2ab7f] <==
	I0401 20:26:42.217841       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0401 20:26:42.217905       1 shared_informer.go:247] Caches are synced for job 
	I0401 20:26:42.218052       1 shared_informer.go:247] Caches are synced for attach detach 
	I0401 20:26:42.218327       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0401 20:26:42.218385       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0401 20:26:42.218730       1 shared_informer.go:247] Caches are synced for deployment 
	I0401 20:26:42.219644       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0401 20:26:42.222868       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0401 20:26:42.228067       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0401 20:26:42.229898       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0401 20:26:42.242716       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8m52n"
	I0401 20:26:42.255473       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5bjk4"
	I0401 20:26:42.271135       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0401 20:26:42.377788       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0401 20:26:42.379364       1 shared_informer.go:247] Caches are synced for stateful set 
	I0401 20:26:42.400582       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vb8ks"
	I0401 20:26:42.400651       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.426096       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.434446       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rmrss"
	I0401 20:26:42.566911       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0401 20:26:42.917995       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:42.918028       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0401 20:26:42.918408       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:43.539217       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0401 20:26:43.546242       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-8m52n"
	
	
	==> kube-proxy [7b18de8419e1524ddac8727fd7e9261582448e897f548b26ad3311e27cf0e6fb] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [f2526055eea0e40e9b5009904a748c68af694b09fbeb58de9177b4b5f55ffcea] <==
	E0401 20:26:22.050850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:22.050959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:22.051031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:22.051104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051131       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:22.051235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:22.051280       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:22.051338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:22.051403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 20:37:21 old-k8s-version-964633 kubelet[2076]: E0401 20:37:21.739580    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:23 old-k8s-version-964633 kubelet[2076]: E0401 20:37:23.555121    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:26 old-k8s-version-964633 kubelet[2076]: E0401 20:37:26.740461    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:31 old-k8s-version-964633 kubelet[2076]: E0401 20:37:31.741187    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:35 old-k8s-version-964633 kubelet[2076]: E0401 20:37:35.555125    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:36 old-k8s-version-964633 kubelet[2076]: E0401 20:37:36.741962    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:41 old-k8s-version-964633 kubelet[2076]: E0401 20:37:41.742778    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:46 old-k8s-version-964633 kubelet[2076]: E0401 20:37:46.743448    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:49 old-k8s-version-964633 kubelet[2076]: E0401 20:37:49.555291    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:51 old-k8s-version-964633 kubelet[2076]: E0401 20:37:51.744207    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:56 old-k8s-version-964633 kubelet[2076]: E0401 20:37:56.745012    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:01 old-k8s-version-964633 kubelet[2076]: E0401 20:38:01.745816    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:04 old-k8s-version-964633 kubelet[2076]: E0401 20:38:04.555226    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:06 old-k8s-version-964633 kubelet[2076]: E0401 20:38:06.746524    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:11 old-k8s-version-964633 kubelet[2076]: E0401 20:38:11.747249    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:16 old-k8s-version-964633 kubelet[2076]: E0401 20:38:16.747991    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:18 old-k8s-version-964633 kubelet[2076]: E0401 20:38:18.555284    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:21 old-k8s-version-964633 kubelet[2076]: E0401 20:38:21.748712    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:26 old-k8s-version-964633 kubelet[2076]: E0401 20:38:26.749452    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:30 old-k8s-version-964633 kubelet[2076]: E0401 20:38:30.555339    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:31 old-k8s-version-964633 kubelet[2076]: E0401 20:38:31.750156    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:36 old-k8s-version-964633 kubelet[2076]: E0401 20:38:36.750832    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:41 old-k8s-version-964633 kubelet[2076]: E0401 20:38:41.751609    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:43 old-k8s-version-964633 kubelet[2076]: E0401 20:38:43.555186    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:46 old-k8s-version-964633 kubelet[2076]: E0401 20:38:46.752446    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1 (73.658962ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-5nmbk:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-5nmbk
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  22s (x9 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 319295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:25:51.595131743Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f156c5de777c528d6f9375314eb0d4cbc858057b93c8250916b99a0c025d2197",
	            "SandboxKey": "/var/run/docker/netns/f156c5de777c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:e3:3a:a8:12:66",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "243297cc045b5d60c15285cd09a136adfdf271f0421c51d1725f61e9cf50e39f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
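
The inspect output above shows the profile container is intact at the Docker layer: it sits on the "old-k8s-version-964633" network at 192.168.85.2, and the guest ports (22, 2376, 5000, 8443, 32443) are all published on 127.0.0.1. As an illustrative sketch (not something the test harness runs), the same fields can be pulled out of a live container with docker's Go-template format flag:

	docker container inspect old-k8s-version-964633 --format '{{json .NetworkSettings.Networks}}'
	docker container inspect old-k8s-version-964633 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
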
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.094355908s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
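
The Audit table is minikube's command journal: each CLI invocation is appended to an audit log (an audit.json under the minikube home directory), and "minikube logs" replays the most recent entries. When only this table is wanted from a long post-mortem dump, a sed sketch along these lines (illustrative, not part of the harness) isolates it:

	out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25 | sed -n '/==> Audit <==/,/==> Last Start <==/p'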
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:46.936490  347136 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:46.937267  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937279  347136 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:46.937283  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937483  347136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:46.938093  347136 out.go:352] Setting JSON to false
	I0401 20:38:46.939336  347136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4873,"bootTime":1743535054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:46.939416  347136 start.go:139] virtualization: kvm guest
	I0401 20:38:46.941391  347136 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:46.942731  347136 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:46.942777  347136 notify.go:220] Checking for updates...
	I0401 20:38:46.945003  347136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:46.946154  347136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:46.947439  347136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:46.948753  347136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:46.949903  347136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:46.951546  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:46.952045  347136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:46.979943  347136 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:46.980058  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.045628  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.033607616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.045796  347136 docker.go:318] overlay module found
	I0401 20:38:47.048624  347136 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:47.049864  347136 start.go:297] selected driver: docker
	I0401 20:38:47.049880  347136 start.go:901] validating driver "docker" against &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.049961  347136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:47.050761  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.117041  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.106419089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.117471  347136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:47.117515  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:47.117580  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:47.117639  347136 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.120421  347136 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:38:47.121737  347136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:47.123130  347136 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:47.124427  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:47.124518  347136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:47.124567  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124806  347136 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124812  347136 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124821  347136 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124871  347136 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124886  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:38:47.124904  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:38:47.124917  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:38:47.124925  347136 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 58.4µs
	I0401 20:38:47.124937  347136 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:38:47.124920  347136 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 132.796µs
	I0401 20:38:47.124950  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:38:47.124967  347136 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 266.852µs
	I0401 20:38:47.124984  347136 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:38:47.124950  347136 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:38:47.124898  347136 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 93.38µs
	I0401 20:38:47.124997  347136 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:38:47.124908  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:38:47.124924  347136 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125007  347136 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.163µs
	I0401 20:38:47.125016  347136 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:38:47.125051  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:38:47.125060  347136 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 139.313µs
	I0401 20:38:47.125072  347136 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:38:47.125103  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:38:47.125122  347136 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 380.281µs
	I0401 20:38:47.125135  347136 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:38:47.125181  347136 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125270  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:38:47.125286  347136 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 161.592µs
	I0401 20:38:47.125299  347136 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:38:47.125308  347136 cache.go:87] Successfully saved all images to host disk.
	I0401 20:38:47.151197  347136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:47.151225  347136 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:47.151245  347136 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:47.151281  347136 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.151359  347136 start.go:364] duration metric: took 54.86µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:38:47.151382  347136 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:47.151393  347136 fix.go:54] fixHost starting: 
	I0401 20:38:47.151728  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.176435  347136 fix.go:112] recreateIfNeeded on no-preload-671514: state=Stopped err=<nil>
	W0401 20:38:47.176470  347136 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:47.178562  347136 out.go:177] * Restarting existing docker container for "no-preload-671514" ...
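
Up to this point the second start is behaving as designed: every cached-image check hits the on-disk tarball in well under a millisecond, the kicbase image is already in the local daemon so no pull is needed, and fix.go sees state=Stopped and restarts the existing container instead of recreating it. The state probe it runs is plain docker and can be repeated by hand (illustrative, outside the test run):

	docker container inspect no-preload-671514 --format={{.State.Status}}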
	
	
	==> CRI-O <==
	Apr 01 20:36:17 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:17.555011273Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3c083e72-d778-4c8b-aa84-9e0597a472d5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:30.554749413Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cbc39080-361f-4e26-8791-75488975b5fb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:30.555012878Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cbc39080-361f-4e26-8791-75488975b5fb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:31 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:31.501216865Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=3000fd33-00a1-4511-9697-7bee9833626b name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:31 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:31.501493427Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=3000fd33-00a1-4511-9697-7bee9833626b name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:44 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:44.554644604Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=04d5b0fa-383b-4abb-b82b-fdb90b2f51d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:44 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:44.554892905Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=04d5b0fa-383b-4abb-b82b-fdb90b2f51d1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:57 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:57.554721559Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=65891265-fe08-4a2f-8447-0556c4c4d554 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:36:57 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:36:57.555007934Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=65891265-fe08-4a2f-8447-0556c4c4d554 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:12 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:12.554607450Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7c10185d-eff2-4f3e-bab6-f3be1749f36c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:12 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:12.554904152Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7c10185d-eff2-4f3e-bab6-f3be1749f36c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:23 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:23.554640894Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=13aadc71-7515-4e0e-8a7b-872c352687ab name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:23 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:23.554924156Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=13aadc71-7515-4e0e-8a7b-872c352687ab name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:35 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:35.554602222Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f9284b4f-eeec-4c8e-b428-04f8d4f2f140 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:35 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:35.554899402Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=f9284b4f-eeec-4c8e-b428-04f8d4f2f140 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:49 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:49.554708862Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=deb55827-ca91-49a4-bc3f-d29c6c183ec1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:37:49 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:37:49.555018170Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=deb55827-ca91-49a4-bc3f-d29c6c183ec1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:04 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:04.554626944Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=538d21a0-b08b-4101-9657-cc85f4df1ee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:04 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:04.554915732Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=538d21a0-b08b-4101-9657-cc85f4df1ee5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:18 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:18.554724991Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=18280b04-19a1-4fdb-bff1-2be4e7786ff9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:18 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:18.554969043Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=18280b04-19a1-4fdb-bff1-2be4e7786ff9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:30.554748361Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=069f7011-e428-4347-bc8d-64e6a8b4f5be name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:30 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:30.555050600Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=069f7011-e428-4347-bc8d-64e6a8b4f5be name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:43.554740527Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=053feadf-749b-40bc-8769-67d36202b7d7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:38:43 old-k8s-version-964633 crio[1034]: time="2025-04-01 20:38:43.554995873Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=053feadf-749b-40bc-8769-67d36202b7d7 name=/runtime.v1alpha2.ImageService/ImageStatus
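
This is the failure signature for the DeployApp tests in this group: for the entire window shown, kubelet asks CRI-O about docker.io/kindest/kindnetd:v20250214-acbabc1a every 11-15 seconds and CRI-O reports it not found, so the kindnet CNI pod can never start. Using the report's own CLI convention, a hedged way to confirm this outside the harness would be:

	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "sudo crictl images | grep kindnetd"
	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a"

If the manual pull also fails, the problem is registry reachability (for example a Docker Hub rate limit) rather than anything CRI-O-specific.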
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b18de8419e15       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   12 minutes ago      Running             kube-proxy                0                   45b225c010954       kube-proxy-vb8ks
	4384af78a1883       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   12 minutes ago      Running             kube-controller-manager   0                   7e4cef1969b72       kube-controller-manager-old-k8s-version-964633
	9513e7ad765e4       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   12 minutes ago      Running             etcd                      0                   aabb404aa7c03       etcd-old-k8s-version-964633
	f2526055eea0e       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   12 minutes ago      Running             kube-scheduler            0                   0a05fd341a521       kube-scheduler-old-k8s-version-964633
	2064fb7c665fb       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   12 minutes ago      Running             kube-apiserver            0                   b311a7ae56993       kube-apiserver-old-k8s-version-964633
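
Note what is absent from this list: only the four static control-plane containers and kube-proxy are running; there is no kindnet container and consequently no coredns, which matches the CRI-O log above. One way to check whether kindnet ever got as far as a (failed) container attempt, assuming the same ssh convention:

	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "sudo crictl ps -a --name kindnet"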
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:36:42 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 de7c8d50f85047d185c1ae1aa27193dd
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy  Starting kube-proxy.
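
The node description pins the root cause down: Ready stays False with "No CNI configuration file in /etc/cni/net.d/", which is exactly what the missing kindnetd image produces, since the kindnet pod is what writes that CNI config on startup. Two hedged spot checks (illustrative commands, not from the run):

	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "ls -la /etc/cni/net.d/"
	kubectl --context old-k8s-version-964633 -n kube-system get pods -o wide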
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [9513e7ad765e4b69c4cbbfbd6cb33f21a3a48b715bdea7a1ff49cc1566bcc760] <==
	2025-04-01 20:35:05.601772 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:15.601695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:25.601777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:35.601845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:45.601799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:35:55.601806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:05.601734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:15.601785 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:17.961904 I | mvcc: store.index: compact 557
	2025-04-01 20:36:17.962724 I | mvcc: finished scheduled compaction at 557 (took 569.383µs)
	2025-04-01 20:36:25.601851 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:35.601736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:45.601681 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:36:55.601695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:05.601735 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:15.601734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:25.601765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:35.601743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:45.601799 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:37:55.601791 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:05.601840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:15.601768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:25.601839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:35.601728 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:38:45.601714 I | etcdserver/api/etcdhttp: /health OK (status code 200)
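
etcd itself is entirely healthy: /health answers 200 every ten seconds for the whole window, and the scheduled compaction at revision 557 finishes in under a millisecond, so the failure is isolated to the network plugin rather than the control plane's storage. On a kubeadm-style node like this one, the same probe can usually be reached on etcd's plaintext metrics listener (port 2381 is kubeadm's default; treating it as present here is an assumption):

	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "curl -s http://127.0.0.1:2381/health"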
	
	
	==> kernel <==
	 20:38:50 up  1:21,  0 users,  load average: 1.29, 1.01, 1.66
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2064fb7c665fb767c07a50e206db452bfd0e93dc10750dd7ecf94bfe4beb0cc4] <==
	I0401 20:33:29.082821       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:34:07.873250       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:34:07.873305       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:34:07.873315       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:34:44.002362       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:34:44.002409       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:34:44.002419       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:35:21.043921       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:35:21.043996       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:35:21.044006       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:36:05.487178       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:36:05.487225       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:36:05.487234       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:36:41.995934       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:36:41.995978       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:36:41.995986       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:37:13.165219       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:37:13.165261       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:37:13.165268       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:37:47.320900       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:37:47.320957       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:37:47.320968       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:38:28.542921       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:38:28.542965       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:38:28.542974       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [4384af78a188378e4c730aadae8ad08f38d60dd777008b0a8138a2838ea2ab7f] <==
	I0401 20:26:42.217841       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0401 20:26:42.217905       1 shared_informer.go:247] Caches are synced for job 
	I0401 20:26:42.218052       1 shared_informer.go:247] Caches are synced for attach detach 
	I0401 20:26:42.218327       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0401 20:26:42.218385       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0401 20:26:42.218730       1 shared_informer.go:247] Caches are synced for deployment 
	I0401 20:26:42.219644       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0401 20:26:42.222868       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0401 20:26:42.228067       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0401 20:26:42.229898       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0401 20:26:42.242716       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-8m52n"
	I0401 20:26:42.255473       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5bjk4"
	I0401 20:26:42.271135       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0401 20:26:42.377788       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0401 20:26:42.379364       1 shared_informer.go:247] Caches are synced for stateful set 
	I0401 20:26:42.400582       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vb8ks"
	I0401 20:26:42.400651       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.426096       1 shared_informer.go:247] Caches are synced for resource quota 
	I0401 20:26:42.434446       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rmrss"
	I0401 20:26:42.566911       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0401 20:26:42.917995       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:42.918028       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0401 20:26:42.918408       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:26:43.539217       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0401 20:26:43.546242       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-8m52n"
	
	
	==> kube-proxy [7b18de8419e1524ddac8727fd7e9261582448e897f548b26ad3311e27cf0e6fb] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [f2526055eea0e40e9b5009904a748c68af694b09fbeb58de9177b4b5f55ffcea] <==
	E0401 20:26:22.050850       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:22.050959       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:22.051031       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:22.051104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051131       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:22.051219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:22.051235       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:22.051280       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:22.051338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:22.051403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 20:37:21 old-k8s-version-964633 kubelet[2076]: E0401 20:37:21.739580    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:23 old-k8s-version-964633 kubelet[2076]: E0401 20:37:23.555121    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:26 old-k8s-version-964633 kubelet[2076]: E0401 20:37:26.740461    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:31 old-k8s-version-964633 kubelet[2076]: E0401 20:37:31.741187    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:35 old-k8s-version-964633 kubelet[2076]: E0401 20:37:35.555125    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:36 old-k8s-version-964633 kubelet[2076]: E0401 20:37:36.741962    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:41 old-k8s-version-964633 kubelet[2076]: E0401 20:37:41.742778    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:46 old-k8s-version-964633 kubelet[2076]: E0401 20:37:46.743448    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:49 old-k8s-version-964633 kubelet[2076]: E0401 20:37:49.555291    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:37:51 old-k8s-version-964633 kubelet[2076]: E0401 20:37:51.744207    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:37:56 old-k8s-version-964633 kubelet[2076]: E0401 20:37:56.745012    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:01 old-k8s-version-964633 kubelet[2076]: E0401 20:38:01.745816    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:04 old-k8s-version-964633 kubelet[2076]: E0401 20:38:04.555226    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:06 old-k8s-version-964633 kubelet[2076]: E0401 20:38:06.746524    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:11 old-k8s-version-964633 kubelet[2076]: E0401 20:38:11.747249    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:16 old-k8s-version-964633 kubelet[2076]: E0401 20:38:16.747991    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:18 old-k8s-version-964633 kubelet[2076]: E0401 20:38:18.555284    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:21 old-k8s-version-964633 kubelet[2076]: E0401 20:38:21.748712    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:26 old-k8s-version-964633 kubelet[2076]: E0401 20:38:26.749452    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:30 old-k8s-version-964633 kubelet[2076]: E0401 20:38:30.555339    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:31 old-k8s-version-964633 kubelet[2076]: E0401 20:38:31.750156    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:36 old-k8s-version-964633 kubelet[2076]: E0401 20:38:36.750832    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:41 old-k8s-version-964633 kubelet[2076]: E0401 20:38:41.751609    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:38:43 old-k8s-version-964633 kubelet[2076]: E0401 20:38:43.555186    2076 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:38:46 old-k8s-version-964633 kubelet[2076]: E0401 20:38:46.752446    2076 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	

-- /stdout --
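The kubelet section above shows the two symptoms that keep this node NotReady: no CNI configuration in /etc/cni/net.d/ and an ImagePullBackOff for docker.io/kindest/kindnetd:v20250214-acbabc1a, so the kindnet DaemonSet never writes the CNI config. A minimal triage sketch, reusing the profile, image, and pod names from this log:

	# Check whether any CNI config ever landed on the node:
	minikube -p old-k8s-version-964633 ssh "ls -la /etc/cni/net.d/"
	# Retry the failing pull by hand to surface the underlying registry error:
	minikube -p old-k8s-version-964633 ssh "sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a"
	# Cross-check the image pull events on the kindnet pod:
	kubectl --context old-k8s-version-964633 -n kube-system describe pod kindnet-rmrss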
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1 (91.022135ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-5nmbk:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-5nmbk
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  24s (x9 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
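The three NotFound errors are a namespace artifact rather than missing pods: coredns-74ff55c5b-5bjk4, kindnet-rmrss, and storage-provisioner run in kube-system, while the describe above defaulted to the default namespace. A namespace-qualified variant of the same check would be:

	kubectl --context old-k8s-version-964633 -n kube-system describe pod coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner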
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (485.07s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (484.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7dd9c189-4306-415d-a744-19912882b9bf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0401 20:30:47.767101   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:47.963578   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:49.445165   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:51.728913   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:58.008562   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:30:58.205220   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:18.490363   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:18.674895   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:18.687291   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:29.328001   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:50.874505   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:59.452087   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:31:59.649328   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:32:13.650329   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:32:40.596324   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:32:56.322989   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:05.582727   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:21.374179   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:21.570991   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:26.124208   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:33.287513   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:33:45.468020   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:07.013218   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:13.169699   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:29.791777   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:34.716340   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:53.251996   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:56.735696   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:34:57.492464   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:35:24.437693   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:35:37.515029   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:35:37.710604   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:36:05.215554   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:36:05.413163   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:38:05.583438   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:38:26.124164   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
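The cert_rotation errors interleaved above come from the test binary's client-go credential watcher still referencing client certificates of profiles that were deleted earlier in the run (the Audit table below records, for example, delete -p flannel-460236); they read as leftover noise from those profiles rather than a cause of this failure. A quick way to confirm the certificates are gone, assuming the workspace layout in the paths above:

	ls /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/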
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/DeployApp: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:194: ***** TestStartStop/group/default-k8s-diff-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
start_stop_delete_test.go:194: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
start_stop_delete_test.go:194: TestStartStop/group/default-k8s-diff-port/serial/DeployApp: showing logs for failed pods as of 2025-04-01 20:38:47.149340331 +0000 UTC m=+3212.750271774
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe po busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context default-k8s-diff-port-993330 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-7wrpd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  2m36s (x2 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
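The lone FailedScheduling event matches the pattern of the other DeployApp failures in this run: the single node still carries the node.kubernetes.io/not-ready taint, which on a one-node minikube cluster usually traces back to the CNI never becoming ready. A quick check of the taint and the node condition, assuming the context name from this test:

	kubectl --context default-k8s-diff-port-993330 get nodes -o jsonpath='{.items[*].spec.taints}'
	kubectl --context default-k8s-diff-port-993330 describe nodes | grep -E 'Taints|Ready'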
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 logs busybox -n default
start_stop_delete_test.go:194: (dbg) kubectl --context default-k8s-diff-port-993330 logs busybox -n default:
start_stop_delete_test.go:194: wait: integration-test=busybox within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:24.363626089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e116c8681f9a446b4eb5781093640ab52b0549a1b9c009ec7c6caa169d37f052",
	            "SandboxKey": "/var/run/docker/netns/e116c8681f9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:ed:d0:09:db:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "cfed49f55c5786829041c1b4d8f3804c0fe9eba623f6b8950b4c8d49cc775ef9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
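When only a few fields from the inspect output matter for a post-mortem, a Go-template query is easier to scan than the full JSON; a sketch against the same container:

	docker inspect default-k8s-diff-port-993330 --format '{{json .NetworkSettings.Ports}}'
	docker inspect default-k8s-diff-port-993330 --format '{{.State.Status}} pid={{.State.Pid}}'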
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.176025205s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:46.936490  347136 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:46.937267  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937279  347136 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:46.937283  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937483  347136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:46.938093  347136 out.go:352] Setting JSON to false
	I0401 20:38:46.939336  347136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4873,"bootTime":1743535054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:46.939416  347136 start.go:139] virtualization: kvm guest
	I0401 20:38:46.941391  347136 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:46.942731  347136 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:46.942777  347136 notify.go:220] Checking for updates...
	I0401 20:38:46.945003  347136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:46.946154  347136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:46.947439  347136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:46.948753  347136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:46.949903  347136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:46.951546  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:46.952045  347136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:46.979943  347136 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:46.980058  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.045628  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.033607616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.045796  347136 docker.go:318] overlay module found
	I0401 20:38:47.048624  347136 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:47.049864  347136 start.go:297] selected driver: docker
	I0401 20:38:47.049880  347136 start.go:901] validating driver "docker" against &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.049961  347136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:47.050761  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.117041  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.106419089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.117471  347136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:47.117515  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:47.117580  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:47.117639  347136 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.120421  347136 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:38:47.121737  347136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:47.123130  347136 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:47.124427  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:47.124518  347136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:47.124567  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124806  347136 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124812  347136 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124821  347136 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124871  347136 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124886  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:38:47.124904  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:38:47.124917  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:38:47.124925  347136 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 58.4µs
	I0401 20:38:47.124937  347136 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:38:47.124920  347136 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 132.796µs
	I0401 20:38:47.124950  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:38:47.124967  347136 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 266.852µs
	I0401 20:38:47.124984  347136 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:38:47.124950  347136 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:38:47.124898  347136 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 93.38µs
	I0401 20:38:47.124997  347136 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:38:47.124908  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:38:47.124924  347136 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125007  347136 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.163µs
	I0401 20:38:47.125016  347136 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:38:47.125051  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:38:47.125060  347136 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 139.313µs
	I0401 20:38:47.125072  347136 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:38:47.125103  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:38:47.125122  347136 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 380.281µs
	I0401 20:38:47.125135  347136 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:38:47.125181  347136 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125270  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:38:47.125286  347136 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 161.592µs
	I0401 20:38:47.125299  347136 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:38:47.125308  347136 cache.go:87] Successfully saved all images to host disk.
	I0401 20:38:47.151197  347136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:47.151225  347136 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:47.151245  347136 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:47.151281  347136 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.151359  347136 start.go:364] duration metric: took 54.86µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:38:47.151382  347136 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:47.151393  347136 fix.go:54] fixHost starting: 
	I0401 20:38:47.151728  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.176435  347136 fix.go:112] recreateIfNeeded on no-preload-671514: state=Stopped err=<nil>
	W0401 20:38:47.176470  347136 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:47.178562  347136 out.go:177] * Restarting existing docker container for "no-preload-671514" ...
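	
	The restart path above is worth noting: fix.go reports no-preload-671514 in state=Stopped, so minikube reuses the existing container instead of recreating it. A minimal sketch of the same state check by hand, using the profile name and the inspect format string from the log:
	
	  docker container inspect no-preload-671514 --format={{.State.Status}}
	  minikube status -p no-preload-671514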
	
	
	==> CRI-O <==
	Apr 01 20:36:04 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:04.239296386Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=631736c2-e76b-4f49-b285-7b1e622238f9 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:19.239574662Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=82fe0b5f-4e49-4c87-b8b7-4ac263f66824 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:19.239837513Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=82fe0b5f-4e49-4c87-b8b7-4ac263f66824 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:31 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:31.239777492Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=404e07ee-8f16-4f08-b311-849c7e037704 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:31 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:31.240080546Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=404e07ee-8f16-4f08-b311-849c7e037704 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:44 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:44.239727513Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ed34fdea-a1b0-4f5d-a79b-f575dead7360 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:44 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:44.240020710Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ed34fdea-a1b0-4f5d-a79b-f575dead7360 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:59 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:59.239053025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d11f7b90-4a5a-45c0-970d-5abb02b4f01b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:59 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:59.239349028Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d11f7b90-4a5a-45c0-970d-5abb02b4f01b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:11.238993696Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a8d0d41f-c2f8-421a-ba6c-c58b2fcb27b6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:11.239298214Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a8d0d41f-c2f8-421a-ba6c-c58b2fcb27b6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:26.239631916Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1abc4fa1-4ed2-4fd0-acb7-e440ef805602 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:26.239927093Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1abc4fa1-4ed2-4fd0-acb7-e440ef805602 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:38 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:38.238973826Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2635bb80-0058-45ea-8133-8f603e6c3341 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:38 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:38.239275459Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2635bb80-0058-45ea-8133-8f603e6c3341 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:49 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:49.239060808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6085be2d-56ab-4a0e-bdef-922e1e568883 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:49 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:49.239366260Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6085be2d-56ab-4a0e-bdef-922e1e568883 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:00 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:00.238988877Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=459600eb-a040-4bb6-bef9-35d29b92c390 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:00 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:00.239232247Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=459600eb-a040-4bb6-bef9-35d29b92c390 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:14 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:14.239094030Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=29d04a0b-2aa1-4392-9794-18a3db9b42da name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:14 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:14.239362592Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=29d04a0b-2aa1-4392-9794-18a3db9b42da name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:25 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:25.239560484Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a216c03f-7361-4626-8347-ae0ebe94c461 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:25 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:25.239818970Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a216c03f-7361-4626-8347-ae0ebe94c461 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:39 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:39.239400963Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=35102049-1ee0-4e7a-b318-56e8fd2436c4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:39 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:39.239698339Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=35102049-1ee0-4e7a-b318-56e8fd2436c4 name=/runtime.v1.ImageService/ImageStatus
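	
	Every CRI-O entry above is the same ImageStatus probe: roughly every 12-15 seconds the kubelet asks for docker.io/kindest/kindnetd:v20250214-acbabc1a and CRI-O reports it missing, so the kindnet CNI pod never comes up. A hedged sketch for confirming and side-loading the image (tag and profile name taken from the log; treating a side-load as the workaround is an assumption about why the pull never succeeds):
	
	  # List what CRI-O actually has on the node; kindnetd should be absent.
	  minikube -p default-k8s-diff-port-993330 ssh -- sudo crictl images
	  # Pull on the host, then inject into the node, bypassing the in-cluster pull.
	  docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	  minikube -p default-k8s-diff-port-993330 image load docker.io/kindest/kindnetd:v20250214-acbabc1a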
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	901ead14674ca       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   afd16935a506b       kube-proxy-btnmc
	0582ac1eac9e7       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   50a8fff230f0e       kube-controller-manager-default-k8s-diff-port-993330
	38f17c6d6c18d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   9bfb2a6c26975       kube-apiserver-default-k8s-diff-port-993330
	21b9dbd8d6257       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   f74b59a5b87b8       kube-scheduler-default-k8s-diff-port-993330
	265bcef800f65       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   d24837c573a23       etcd-default-k8s-diff-port-993330
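	
	Note what is absent from this table: only the four control-plane static pods plus kube-proxy are running; there is no kindnet container and no coredns, consistent with the missing kindnetd image above. To see the stuck pods rather than just the running containers (the context name follows minikube's profile-named-context convention):
	
	  kubectl --context default-k8s-diff-port-993330 -n kube-system get pods -o wide
	  minikube -p default-k8s-diff-port-993330 ssh -- sudo crictl ps -a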
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f9efd91622a43ff8c62538d2a5dee6c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
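	
	The Ready=False condition pins the failure down: "No CNI configuration file in /etc/cni/net.d/" means kindnet never started and never wrote its CNI config, so the node keeps the node.kubernetes.io/not-ready:NoSchedule taint and ordinary pods cannot schedule. A minimal check, assuming the standard minikube node layout:
	
	  minikube -p default-k8s-diff-port-993330 ssh -- sudo ls -la /etc/cni/net.d/
	  kubectl --context default-k8s-diff-port-993330 get node default-k8s-diff-port-993330 -o jsonpath='{.spec.taints}'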
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
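	
	The "martian source" entries are the kernel flagging packets whose source address is implausible for the receiving interface; with several test clusters sharing this host and pod CIDR 10.244.0.0/24, they read as bridge-traffic noise rather than the failure itself. They appear only because martian logging is enabled; a read-only check of the relevant sysctls inside the node:
	
	  minikube -p default-k8s-diff-port-993330 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter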
	
	
	==> etcd [265bcef800f65f87a982f41760a50d05b8b471734d0c9eb3c0aedfa4ea71219e] <==
	{"level":"info","ts":"2025-04-01T20:26:34.535313Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:34.535354Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:35.074806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.075737Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076316Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:35.076337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076690Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076729Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.077119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:35.118248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:36:35.484768Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":537}
	{"level":"info","ts":"2025-04-01T20:36:35.489426Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":537,"took":"4.372726ms","hash":3434463065,"current-db-size-bytes":1343488,"current-db-size":"1.3 MB","current-db-size-in-use-bytes":1343488,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-04-01T20:36:35.489478Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3434463065,"revision":537,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:48 up  1:21,  0 users,  load average: 1.05, 0.96, 1.64
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [38f17c6d6c18db0d9f10a0d87db28e50ce8bb1d3e5d521a5fb71b3b079328b39] <==
	I0401 20:26:36.918604       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:26:36.919358       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 20:26:36.919648       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:36.919688       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:26:36.919698       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:26:36.919706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:36.919713       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:36.922458       1 controller.go:615] quota admission added evaluator for: namespaces
	E0401 20:26:36.926891       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:36.977970       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:37.801366       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:37.809683       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:37.809825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:38.352965       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:38.395811       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:38.529158       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:38.534999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0401 20:26:38.536167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:38.541299       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:38.843867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:39.334365       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:39.350211       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:39.357875       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:44.028415       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:44.426310       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
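	
	The apiserver bootstrap looks healthy: each "quota admission added evaluator" line is normal first-use registration, and the two allocated clusterIPs (10.96.0.1 for default/kubernetes, 10.96.0.10 for kube-system/kube-dns) can be verified directly:
	
	  kubectl --context default-k8s-diff-port-993330 get svc kubernetes -n default
	  kubectl --context default-k8s-diff-port-993330 get svc kube-dns -n kube-system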
	
	
	==> kube-controller-manager [0582ac1eac9e7fe6cc9ae5fe1a2fdbca64dc6f2415721e0e6f9cd8e075c2f7ac] <==
	I0401 20:26:43.392867       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0401 20:26:43.392908       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:43.392986       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:26:43.393143       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:26:43.393352       1 shared_informer.go:320] Caches are synced for crt configmap
	I0401 20:26:43.393515       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0401 20:26:43.393537       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:26:43.393631       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0401 20:26:43.393845       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:26:43.393921       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:43.394241       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:43.394351       1 shared_informer.go:320] Caches are synced for TTL
	I0401 20:26:43.395317       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:43.395549       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:43.397813       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.398860       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.414109       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:44.334167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	I0401 20:26:44.628314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="595.822096ms"
	I0401 20:26:44.640479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.099169ms"
	I0401 20:26:44.645556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="5.030169ms"
	I0401 20:26:44.656890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.205665ms"
	I0401 20:26:44.656986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.73µs"
	I0401 20:31:15.084162       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	I0401 20:36:20.634301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
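	
	The replicaset-controller lines show coredns-668d6bf9bc syncing at 20:26:44, i.e. the coredns pods were created, but without a working CNI they cannot run on the NotReady node. To confirm from the pod side (k8s-app=kube-dns is the standard coredns label):
	
	  kubectl --context default-k8s-diff-port-993330 -n kube-system get pods -l k8s-app=kube-dns -o wide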
	
	
	==> kube-proxy [901ead14674ca902c80ccfab27785fd598218cda7bce2cad3a9ca70939f51f28] <==
	I0401 20:26:44.894601       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:44.998268       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:26:44.998336       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:45.018925       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:45.019003       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:45.021196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:45.021635       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:45.021671       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:45.023446       1 config.go:329] "Starting node config controller"
	I0401 20:26:45.023539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:45.023422       1 config.go:199] "Starting service config controller"
	I0401 20:26:45.023632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:45.023440       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:45.023689       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:45.124011       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:45.124014       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:45.124012       1 shared_informer.go:320] Caches are synced for service config
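	
	kube-proxy itself starts cleanly; its only warning is that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs. The log's own suggestion (--nodeport-addresses primary) maps onto the kubeadm-managed ConfigMap; a sketch assuming the default kubeadm object names:
	
	  # Set nodePortAddresses: ["primary"] under config.conf, then restart kube-proxy:
	  kubectl --context default-k8s-diff-port-993330 -n kube-system edit configmap kube-proxy
	  kubectl --context default-k8s-diff-port-993330 -n kube-system rollout restart daemonset kube-proxy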
	
	
	==> kube-scheduler [21b9dbd8d62576a9c01ee56d38988a8024ae1ee6a6c4d006a881f902776b6225] <==
	W0401 20:26:37.744064       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:37.744119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.795743       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:37.795891       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:37.853181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:37.853456       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.899070       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:37.899125       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.935695       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:37.935832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.974076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:37.974251       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.986936       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:37.986983       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.999704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:37.999872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.064871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.064927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.073640       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:38.073691       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.139325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.139494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.164184       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:38.164333       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:39.623914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
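The "forbidden" and "Unhandled Error" lines above are startup noise rather than the failure under test: the scheduler's informers begin listing resources before the API server has finished reconciling RBAC for system:kube-scheduler, and the final "Caches are synced" line shows the retries succeeded. A quick after-the-fact confirmation sketch using plain kubectl impersonation (context name taken from this report):

	kubectl --context default-k8s-diff-port-993330 auth can-i list pods --as=system:kube-scheduler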
	
	
	==> kubelet <==
	Apr 01 20:37:54 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:37:54.479920    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:37:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:37:59.349404    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539879349151693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:37:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:37:59.349442    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539879349151693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:37:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:37:59.480806    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:00 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:00.239558    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:04 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:04.482342    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.350378    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539889350180747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.350422    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539889350180747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.483905    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:14 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:14.239677    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:14 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:14.484881    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.351343    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539899351177296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.351375    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539899351177296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.485322    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:24 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:24.486590    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:25 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:25.240068    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.352329    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539909352125299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.352367    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539909352125299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.487443    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:34 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:34.488341    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.239969    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.353330    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539919353138268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.353371    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539919353138268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.491956    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:44 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:44.493155    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
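Two errors repeat throughout the kubelet block above and account for this DeployApp failure: the CNI configuration is missing from /etc/cni/net.d/ because the kindnet pod never starts, and kindnet never starts because pulling docker.io/kindest/kindnetd:v20250214-acbabc1a hits Docker Hub's unauthenticated pull rate limit (toomanyrequests). One possible workaround sketch for a local rerun, assuming the host docker daemon is logged in to Docker Hub, is to pull the image on the host and sideload it so the in-cluster pull is skipped:

	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p default-k8s-diff-port-993330 image load docker.io/kindest/kindnetd:v20250214-acbabc1a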
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1 (77.569096ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7wrpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m38s (x2 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1
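The lone FailedScheduling event on the busybox pod is a downstream symptom of the same root cause: without a CNI the node never reports Ready, so it keeps the node.kubernetes.io/not-ready taint named in the event and the scheduler has nowhere to place the pod. The three NotFound errors most likely mean those kube-system pods were replaced between the pod listing and the describe call. A quick way to confirm the taint is the blocker (context and node name as used throughout this report):

	kubectl --context default-k8s-diff-port-993330 describe node default-k8s-diff-port-993330 | grep -A2 -i taints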
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335242,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:26:24.363626089Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e116c8681f9a446b4eb5781093640ab52b0549a1b9c009ec7c6caa169d37f052",
	            "SandboxKey": "/var/run/docker/netns/e116c8681f9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:ed:d0:09:db:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "cfed49f55c5786829041c1b4d8f3804c0fe9eba623f6b8950b4c8d49cc775ef9",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
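The detail that matters in the inspect output for this test group is the 8444/tcp mapping (the non-default API server port that default-k8s-diff-port exercises), published on 127.0.0.1:33106. Instead of scanning the JSON, the mapping can be read directly; both forms below are standard docker CLI usage:

	docker port default-k8s-diff-port-993330 8444/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-993330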
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.079935729s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat docker                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/docker/daemon.json                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo docker                          | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | system info                                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status cri-docker                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat cri-docker                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | cri-dockerd --version                                  |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | systemctl status containerd                            |                              |         |         |                     |                     |
	|         | --all --full --no-pager                                |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat containerd                               |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
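For reference, the final start row of the audit table flattened into one reproducible command, with every flag taken verbatim from the Args column and the binary path matching the one used throughout this run:

	out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.2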
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:46.936490  347136 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:46.937267  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937279  347136 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:46.937283  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937483  347136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:46.938093  347136 out.go:352] Setting JSON to false
	I0401 20:38:46.939336  347136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4873,"bootTime":1743535054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:46.939416  347136 start.go:139] virtualization: kvm guest
	I0401 20:38:46.941391  347136 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:46.942731  347136 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:46.942777  347136 notify.go:220] Checking for updates...
	I0401 20:38:46.945003  347136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:46.946154  347136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:46.947439  347136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:46.948753  347136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:46.949903  347136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:46.951546  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:46.952045  347136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:46.979943  347136 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:46.980058  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.045628  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.033607616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.045796  347136 docker.go:318] overlay module found
	I0401 20:38:47.048624  347136 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:47.049864  347136 start.go:297] selected driver: docker
	I0401 20:38:47.049880  347136 start.go:901] validating driver "docker" against &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.049961  347136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:47.050761  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.117041  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.106419089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.117471  347136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:47.117515  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:47.117580  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:47.117639  347136 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.120421  347136 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:38:47.121737  347136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:47.123130  347136 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:47.124427  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:47.124518  347136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:47.124567  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124806  347136 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124812  347136 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124821  347136 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124871  347136 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124886  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:38:47.124904  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:38:47.124917  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:38:47.124925  347136 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 58.4µs
	I0401 20:38:47.124937  347136 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:38:47.124920  347136 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 132.796µs
	I0401 20:38:47.124950  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:38:47.124967  347136 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 266.852µs
	I0401 20:38:47.124984  347136 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:38:47.124950  347136 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:38:47.124898  347136 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 93.38µs
	I0401 20:38:47.124997  347136 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:38:47.124908  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:38:47.124924  347136 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125007  347136 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.163µs
	I0401 20:38:47.125016  347136 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:38:47.125051  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:38:47.125060  347136 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 139.313µs
	I0401 20:38:47.125072  347136 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:38:47.125103  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:38:47.125122  347136 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 380.281µs
	I0401 20:38:47.125135  347136 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:38:47.125181  347136 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125270  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:38:47.125286  347136 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 161.592µs
	I0401 20:38:47.125299  347136 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:38:47.125308  347136 cache.go:87] Successfully saved all images to host disk.
	I0401 20:38:47.151197  347136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:47.151225  347136 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:47.151245  347136 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:47.151281  347136 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.151359  347136 start.go:364] duration metric: took 54.86µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:38:47.151382  347136 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:47.151393  347136 fix.go:54] fixHost starting: 
	I0401 20:38:47.151728  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.176435  347136 fix.go:112] recreateIfNeeded on no-preload-671514: state=Stopped err=<nil>
	W0401 20:38:47.176470  347136 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:47.178562  347136 out.go:177] * Restarting existing docker container for "no-preload-671514" ...
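Note on the cache.go lines above: every image resolves in microseconds because the tar's existence is checked under a per-image lock and the save is skipped when it is already on disk ("... exists ... succeeded"). A minimal sketch of that check-then-skip pattern (names and paths are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)

	// lockFor holds one mutex per image name, mirroring the per-image
	// "acquiring lock" lines in the log above.
	var lockFor sync.Map

	// cacheImage returns immediately when the cached tar already exists,
	// which is why each image above was handled in microseconds.
	func cacheImage(cacheDir, image string) error {
		mu, _ := lockFor.LoadOrStore(image, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(tar); err == nil {
			return nil // cached tar exists: skip the save
		}
		// A real implementation would pull the image and write the tar here.
		return fmt.Errorf("no cached tar for %s at %s", image, tar)
	}

	func main() {
		if err := cacheImage("/tmp/cache/images", "registry.k8s.io/pause:3.10"); err != nil {
			fmt.Println(err)
		}
	}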
	
	
	==> CRI-O <==
	Apr 01 20:36:04 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:04.239296386Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=631736c2-e76b-4f49-b285-7b1e622238f9 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:19.239574662Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=82fe0b5f-4e49-4c87-b8b7-4ac263f66824 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:19 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:19.239837513Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=82fe0b5f-4e49-4c87-b8b7-4ac263f66824 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:31 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:31.239777492Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=404e07ee-8f16-4f08-b311-849c7e037704 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:31 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:31.240080546Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=404e07ee-8f16-4f08-b311-849c7e037704 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:44 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:44.239727513Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ed34fdea-a1b0-4f5d-a79b-f575dead7360 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:44 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:44.240020710Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ed34fdea-a1b0-4f5d-a79b-f575dead7360 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:59 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:59.239053025Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d11f7b90-4a5a-45c0-970d-5abb02b4f01b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:36:59 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:36:59.239349028Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d11f7b90-4a5a-45c0-970d-5abb02b4f01b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:11.238993696Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a8d0d41f-c2f8-421a-ba6c-c58b2fcb27b6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:11 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:11.239298214Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a8d0d41f-c2f8-421a-ba6c-c58b2fcb27b6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:26.239631916Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1abc4fa1-4ed2-4fd0-acb7-e440ef805602 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:26 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:26.239927093Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1abc4fa1-4ed2-4fd0-acb7-e440ef805602 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:38 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:38.238973826Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2635bb80-0058-45ea-8133-8f603e6c3341 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:38 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:38.239275459Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2635bb80-0058-45ea-8133-8f603e6c3341 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:49 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:49.239060808Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6085be2d-56ab-4a0e-bdef-922e1e568883 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:37:49 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:37:49.239366260Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6085be2d-56ab-4a0e-bdef-922e1e568883 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:00 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:00.238988877Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=459600eb-a040-4bb6-bef9-35d29b92c390 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:00 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:00.239232247Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=459600eb-a040-4bb6-bef9-35d29b92c390 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:14 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:14.239094030Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=29d04a0b-2aa1-4392-9794-18a3db9b42da name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:14 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:14.239362592Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=29d04a0b-2aa1-4392-9794-18a3db9b42da name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:25 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:25.239560484Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a216c03f-7361-4626-8347-ae0ebe94c461 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:25 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:25.239818970Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a216c03f-7361-4626-8347-ae0ebe94c461 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:39 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:39.239400963Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=35102049-1ee0-4e7a-b318-56e8fd2436c4 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:38:39 default-k8s-diff-port-993330 crio[1041]: time="2025-04-01 20:38:39.239698339Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=35102049-1ee0-4e7a-b318-56e8fd2436c4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	901ead14674ca       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                0                   afd16935a506b       kube-proxy-btnmc
	0582ac1eac9e7       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   12 minutes ago      Running             kube-controller-manager   0                   50a8fff230f0e       kube-controller-manager-default-k8s-diff-port-993330
	38f17c6d6c18d       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   12 minutes ago      Running             kube-apiserver            0                   9bfb2a6c26975       kube-apiserver-default-k8s-diff-port-993330
	21b9dbd8d6257       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   12 minutes ago      Running             kube-scheduler            0                   f74b59a5b87b8       kube-scheduler-default-k8s-diff-port-993330
	265bcef800f65       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   12 minutes ago      Running             etcd                      0                   d24837c573a23       etcd-default-k8s-diff-port-993330
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:38:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:36:20 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f9efd91622a43ff8c62538d2a5dee6c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
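The Ready=False condition above is the kubelet relaying that the container runtime found no CNI configuration; kindnet would normally write one under /etc/cni/net.d once its pod starts, but that pod never comes up (see the kubelet log below). A minimal diagnostic sketch of the same check, run from inside the node (illustrative only, not part of the test suite):

	package main

	import (
		"fmt"
		"path/filepath"
	)

	func main() {
		// Same directory the kubelet message points at.
		matches, err := filepath.Glob("/etc/cni/net.d/*")
		if err != nil || len(matches) == 0 {
			fmt.Println("no CNI configuration found -> node stays NotReady")
			return
		}
		fmt.Println("CNI configs present:", matches)
	}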
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [265bcef800f65f87a982f41760a50d05b8b471734d0c9eb3c0aedfa4ea71219e] <==
	{"level":"info","ts":"2025-04-01T20:26:34.535313Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:26:34.535354Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:26:35.074806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-04-01T20:26:35.074890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.074916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:26:35.075737Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076316Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:26:35.076337Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:26:35.076563Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076690Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:26:35.076703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.076729Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:26:35.077119Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:26:35.077990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:26:35.118248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:36:35.484768Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":537}
	{"level":"info","ts":"2025-04-01T20:36:35.489426Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":537,"took":"4.372726ms","hash":3434463065,"current-db-size-bytes":1343488,"current-db-size":"1.3 MB","current-db-size-in-use-bytes":1343488,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-04-01T20:36:35.489478Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3434463065,"revision":537,"compact-revision":-1}
	
	
	==> kernel <==
	 20:38:50 up  1:21,  0 users,  load average: 1.29, 1.01, 1.66
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [38f17c6d6c18db0d9f10a0d87db28e50ce8bb1d3e5d521a5fb71b3b079328b39] <==
	I0401 20:26:36.918604       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0401 20:26:36.919358       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0401 20:26:36.919648       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0401 20:26:36.919688       1 aggregator.go:171] initial CRD sync complete...
	I0401 20:26:36.919698       1 autoregister_controller.go:144] Starting autoregister controller
	I0401 20:26:36.919706       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0401 20:26:36.919713       1 cache.go:39] Caches are synced for autoregister controller
	I0401 20:26:36.922458       1 controller.go:615] quota admission added evaluator for: namespaces
	E0401 20:26:36.926891       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0401 20:26:36.977970       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0401 20:26:37.801366       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0401 20:26:37.809683       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0401 20:26:37.809825       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0401 20:26:38.352965       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:26:38.395811       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:26:38.529158       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0401 20:26:38.534999       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0401 20:26:38.536167       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:26:38.541299       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:26:38.843867       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0401 20:26:39.334365       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0401 20:26:39.350211       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0401 20:26:39.357875       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0401 20:26:44.028415       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:26:44.426310       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0582ac1eac9e7fe6cc9ae5fe1a2fdbca64dc6f2415721e0e6f9cd8e075c2f7ac] <==
	I0401 20:26:43.392867       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0401 20:26:43.392908       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0401 20:26:43.392986       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:26:43.393143       1 shared_informer.go:320] Caches are synced for GC
	I0401 20:26:43.393352       1 shared_informer.go:320] Caches are synced for crt configmap
	I0401 20:26:43.393515       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0401 20:26:43.393537       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0401 20:26:43.393631       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0401 20:26:43.393845       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:26:43.393921       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:26:43.394241       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0401 20:26:43.394351       1 shared_informer.go:320] Caches are synced for TTL
	I0401 20:26:43.395317       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:26:43.395549       1 shared_informer.go:320] Caches are synced for ephemeral
	I0401 20:26:43.397813       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.398860       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:26:43.414109       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:26:44.334167       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	I0401 20:26:44.628314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="595.822096ms"
	I0401 20:26:44.640479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.099169ms"
	I0401 20:26:44.645556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="5.030169ms"
	I0401 20:26:44.656890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="11.205665ms"
	I0401 20:26:44.656986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="57.73µs"
	I0401 20:31:15.084162       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	I0401 20:36:20.634301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	
	
	==> kube-proxy [901ead14674ca902c80ccfab27785fd598218cda7bce2cad3a9ca70939f51f28] <==
	I0401 20:26:44.894601       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:26:44.998268       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:26:44.998336       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:26:45.018925       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:26:45.019003       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:26:45.021196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:26:45.021635       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:26:45.021671       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:26:45.023446       1 config.go:329] "Starting node config controller"
	I0401 20:26:45.023539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:26:45.023422       1 config.go:199] "Starting service config controller"
	I0401 20:26:45.023632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:26:45.023440       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:26:45.023689       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:26:45.124011       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:26:45.124014       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:26:45.124012       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [21b9dbd8d62576a9c01ee56d38988a8024ae1ee6a6c4d006a881f902776b6225] <==
	W0401 20:26:37.744064       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:37.744119       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.795743       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:37.795891       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0401 20:26:37.853181       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:37.853456       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.899070       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:37.899125       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.935695       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:37.935832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.974076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:37.974251       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.986936       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:37.986983       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:37.999704       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0401 20:26:37.999872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.064871       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.064927       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.073640       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:38.073691       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.139325       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:38.139494       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0401 20:26:38.164184       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:38.164333       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0401 20:26:39.623914       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:37:59 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:37:59.480806    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:00 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:00.239558    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:04 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:04.482342    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.350378    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539889350180747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.350422    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539889350180747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:09 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:09.483905    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:14 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:14.239677    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:14 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:14.484881    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.351343    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539899351177296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.351375    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539899351177296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:19 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:19.485322    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:24 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:24.486590    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:25 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:25.240068    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.352329    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539909352125299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.352367    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539909352125299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:29 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:29.487443    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:34 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:34.488341    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.239969    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.353330    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539919353138268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.353371    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539919353138268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:39 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:39.491956    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:44 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:44.493155    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:38:49 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:49.354541    1652 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539929354336668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:49 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:49.354577    1652 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743539929354336668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:38:49 default-k8s-diff-port-993330 kubelet[1652]: E0401 20:38:49.494845    1652 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
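The repeated ImagePullBackOff for kindnet in the kubelet log above traces to Docker Hub's anonymous pull rate limit (the toomanyrequests error), which in turn keeps the CNI from ever coming up. A minimal sketch of one workaround, assuming Docker Hub credentials are available on the CI host and reusing the profile name and image tag taken from the log (this is not something the test itself does):

	# authenticate so pulls count against the higher per-user limit (credentials assumed)
	docker login
	# pull the exact tag named in the kubelet error above
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	# side-load the image into the profile so the kubelet never contacts Docker Hub
	minikube -p default-k8s-diff-port-993330 image load docker.io/kindest/kindnetd:v20250214-acbabc1a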
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1 (115.314905ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7wrpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  2m40s (x2 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (484.79s)
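The FailedScheduling event above is a downstream symptom of the same pull failure: with kindnet stuck in ImagePullBackOff there is no CNI, the node keeps its node.kubernetes.io/not-ready taint, and the busybox pod can never schedule. A short sketch of how that chain could be confirmed against the same context (the app=kindnet label is the conventional kindnet DaemonSet label and is assumed here):

	# show the taint that blocks scheduling
	kubectl --context default-k8s-diff-port-993330 describe node | grep -A1 Taints
	# show the pod that would clear the taint once its image can be pulled
	kubectl --context default-k8s-diff-port-993330 -n kube-system get pods -l app=kindnet -o wide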

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (250.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m8.658441737s)

                                                
                                                
-- stdout --
	* [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Restarting existing docker container for "no-preload-671514" ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:38:46.936490  347136 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:46.937267  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937279  347136 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:46.937283  347136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:46.937483  347136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:46.938093  347136 out.go:352] Setting JSON to false
	I0401 20:38:46.939336  347136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4873,"bootTime":1743535054,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:46.939416  347136 start.go:139] virtualization: kvm guest
	I0401 20:38:46.941391  347136 out.go:177] * [no-preload-671514] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:46.942731  347136 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:46.942777  347136 notify.go:220] Checking for updates...
	I0401 20:38:46.945003  347136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:46.946154  347136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:46.947439  347136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:46.948753  347136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:46.949903  347136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:46.951546  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:46.952045  347136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:46.979943  347136 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:46.980058  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.045628  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.033607616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.045796  347136 docker.go:318] overlay module found
	I0401 20:38:47.048624  347136 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:47.049864  347136 start.go:297] selected driver: docker
	I0401 20:38:47.049880  347136 start.go:901] validating driver "docker" against &{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.049961  347136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:47.050761  347136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:47.117041  347136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:38:47.106419089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:47.117471  347136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:47.117515  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:47.117580  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:47.117639  347136 start.go:340] cluster config:
	{Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:47.120421  347136 out.go:177] * Starting "no-preload-671514" primary control-plane node in "no-preload-671514" cluster
	I0401 20:38:47.121737  347136 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:47.123130  347136 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:47.124427  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:47.124518  347136 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:47.124567  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mk74d06c30fde6972f1a0a4a22af69395cb6e1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124713  347136 cache.go:107] acquiring lock: {Name:mkf4e5cada287eff14b4b5f4964c567c9d80cc53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124806  347136 cache.go:107] acquiring lock: {Name:mkb06bbec53b7f1b472a2beeeb931cba42a6f35d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124812  347136 cache.go:107] acquiring lock: {Name:mk39295c3022f200f39c7bdf650e2c58cd1efcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124821  347136 cache.go:107] acquiring lock: {Name:mk57c3464a5a1fcaecd1fe3cd24e0eda2d35c33f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124871  347136 cache.go:107] acquiring lock: {Name:mk2c5435a367a3a2beb80f3fccfe037c7cc35b73 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.124886  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0401 20:38:47.124904  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0401 20:38:47.124917  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 exists
	I0401 20:38:47.124925  347136 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2" took 58.4µs
	I0401 20:38:47.124937  347136 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.2 succeeded
	I0401 20:38:47.124920  347136 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 132.796µs
	I0401 20:38:47.124950  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 exists
	I0401 20:38:47.124967  347136 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2" took 266.852µs
	I0401 20:38:47.124984  347136 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.2 succeeded
	I0401 20:38:47.124950  347136 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0401 20:38:47.124898  347136 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 93.38µs
	I0401 20:38:47.124997  347136 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0401 20:38:47.124908  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0401 20:38:47.124924  347136 cache.go:107] acquiring lock: {Name:mk22905b9fefaa930092acc1fcf69fac77e98af8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125007  347136 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 300.163µs
	I0401 20:38:47.125016  347136 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0401 20:38:47.125051  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0401 20:38:47.125060  347136 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 139.313µs
	I0401 20:38:47.125072  347136 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0401 20:38:47.125103  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 exists
	I0401 20:38:47.125122  347136 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2" took 380.281µs
	I0401 20:38:47.125135  347136 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.2 succeeded
	I0401 20:38:47.125181  347136 cache.go:107] acquiring lock: {Name:mk0e3517af90b85369c1dd5412a6204490e6665d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.125270  347136 cache.go:115] /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 exists
	I0401 20:38:47.125286  347136 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.2" -> "/home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2" took 161.592µs
	I0401 20:38:47.125299  347136 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.2 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.2 succeeded
	I0401 20:38:47.125308  347136 cache.go:87] Successfully saved all images to host disk.
	I0401 20:38:47.151197  347136 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:47.151225  347136 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:47.151245  347136 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:47.151281  347136 start.go:360] acquireMachinesLock for no-preload-671514: {Name:mke8e7ca98bfe86ab362882ba4ee610904de7aac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:47.151359  347136 start.go:364] duration metric: took 54.86µs to acquireMachinesLock for "no-preload-671514"
	I0401 20:38:47.151382  347136 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:47.151393  347136 fix.go:54] fixHost starting: 
	I0401 20:38:47.151728  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.176435  347136 fix.go:112] recreateIfNeeded on no-preload-671514: state=Stopped err=<nil>
	W0401 20:38:47.176470  347136 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:47.178562  347136 out.go:177] * Restarting existing docker container for "no-preload-671514" ...
	I0401 20:38:47.179983  347136 cli_runner.go:164] Run: docker start no-preload-671514
	I0401 20:38:47.510086  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:47.532191  347136 kic.go:430] container "no-preload-671514" state is running.
	I0401 20:38:47.532575  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:47.559308  347136 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/config.json ...
	I0401 20:38:47.559517  347136 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:47.559564  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:47.584697  347136 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:47.584927  347136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0401 20:38:47.584941  347136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:47.585657  347136 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51028->127.0.0.1:33108: read: connection reset by peer
	I0401 20:38:50.725952  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671514
	
	I0401 20:38:50.725988  347136 ubuntu.go:169] provisioning hostname "no-preload-671514"
	I0401 20:38:50.726050  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:50.749136  347136 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:50.749458  347136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0401 20:38:50.749479  347136 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-671514 && echo "no-preload-671514" | sudo tee /etc/hostname
	I0401 20:38:50.904622  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-671514
	
	I0401 20:38:50.904687  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:50.926258  347136 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:50.926536  347136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0401 20:38:50.926566  347136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-671514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-671514/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-671514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:51.066772  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:51.066801  347136 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:51.066823  347136 ubuntu.go:177] setting up certificates
	I0401 20:38:51.066835  347136 provision.go:84] configureAuth start
	I0401 20:38:51.066889  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:51.087833  347136 provision.go:143] copyHostCerts
	I0401 20:38:51.087902  347136 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:51.087921  347136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:51.088001  347136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:51.088136  347136 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:51.088151  347136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:51.088200  347136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:51.088291  347136 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:51.088300  347136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:51.088339  347136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:51.088866  347136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.no-preload-671514 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-671514]
	I0401 20:38:51.493850  347136 provision.go:177] copyRemoteCerts
	I0401 20:38:51.493918  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:51.493963  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:51.520049  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:51.625831  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:38:51.658669  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:51.690244  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:51.713085  347136 provision.go:87] duration metric: took 646.236061ms to configureAuth
	I0401 20:38:51.713116  347136 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:51.713337  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:51.713449  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:51.744148  347136 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:51.744461  347136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0401 20:38:51.744490  347136 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-671514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347539,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:47.214891198Z",
	            "FinishedAt": "2025-04-01T20:38:46.056346181Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bbc852e72936fcd498ad1c3a51d7c1f88352c6a93862744e1874c53a1007c0b",
	            "SandboxKey": "/var/run/docker/netns/5bbc852e7293",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:42:07:e3:85:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "3e43b7030559efe8587100f9aafe4e5d830bd7b517b3927b0b1dddcdf10d9cd5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-671514 logs -n 25: (1.007916359s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
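	The last recorded start above can be replayed as one command; a sketch assembled verbatim from the flags in that table row, assuming the harness binary path used throughout this report:
	
	out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --apiserver-port=8444 --driver=docker \
	  --container-runtime=crio --kubernetes-version=v1.32.2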
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
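	The container inspect template above is how minikube discovers which host port docker mapped to the container's sshd (22/tcp); on this run it resolved to 33108, the port the new ssh client then dials. A sketch of the same lookup run by hand, with the final ssh line a hypothetical manual equivalent using the key path from the log:
	
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-671514
	# 33108
	ssh -p 33108 -i /home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa docker@127.0.0.1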
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
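	The two find commands above neutralize any pre-existing loopback/bridge CNI configs by renaming them with a .mk_disabled suffix (reversible, unlike deletion), so that only the CNI minikube installs, kindnet here, stays active. The rename pattern in isolation, as a sketch:
	
	# find substitutes {} inside the sh -c string before the shell runs,
	# which is why the logged command quotes it the way it does
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'sudo mv {} {}.mk_disabled' \;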
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
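	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart (a reconstruction from the commands, not a capture of the file):
	
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]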
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
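	The unit text above is installed as a systemd drop-in (the 367-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); the empty ExecStart= line is the standard systemd idiom for replacing, rather than appending to, the packaged command line. A reconstructed sketch of the drop-in:
	
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (reconstructed)
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2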
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
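	The 2291-byte manifest staged to /var/tmp/minikube/kubeadm.yaml.new is the kubeadm config rendered above; if such a file needs a manual sanity check, kubeadm v1.26+ carries a validator (a sketch using this run's paths, not something the harness itself runs):
	
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new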
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
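	The one-liner above is minikube's idempotent /etc/hosts update: drop any stale entry for the name, append the fresh mapping, and install the result via a temp file with sudo cp, since a plain shell redirect onto /etc/hosts would run unprivileged. Expanded for readability:
	
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.76.2\tcontrol-plane.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts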
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
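	The hash-and-symlink sequence above exists because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filename; the hash printed for each PEM becomes the <hash>.0 link name. Reproduced by hand with this run's values:
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0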
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
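	Each -checkend 86400 probe asks whether a certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero means it expires inside the window and minikube regenerates it. A minimal sketch of the same check:
	
	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/etcd/server.crt; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h; regenerate"
	fi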
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
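The repeated `docker container inspect -f` calls above use a Go template to pull out the host port that Docker mapped to the container's SSH port (22/tcp); minikube then dials 127.0.0.1 on that port with the per-machine key shown in the sshutil lines. A minimal standalone sketch of the same lookup, assuming the no-preload-671514 container from this run is still up:

	# Extract the host port bound to the container's 22/tcp and SSH in.
	# Key path and username are taken from the sshutil lines above.
	HOST_PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-671514)
	ssh -p "$HOST_PORT" \
	  -i /home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa \
	  docker@127.0.0.1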
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
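The /etc/hosts script above is deliberately idempotent: `grep -xq` matches whole lines only, so the hostname entry is added (or the existing 127.0.1.1 line rewritten) exactly once and reruns are no-ops. A quick check of the result, assuming a shell on the embed-certs-974821 node:

	# Whole-line match (-x); succeeds after the first run, so the script's
	# outer guard skips any further edits. \s is a GNU grep extension.
	grep -x '127.0.1.1\sembed-certs-974821' /etc/hosts && echo "entry present"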
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
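The server certificate generated here carries SANs for every address minikube may use to reach the node: loopback, the container IP, and the machine names. To see the SANs that actually ended up in the cert, something along these lines works, with the path taken from the log line above:

	# Prints the DNS/IP entries corresponding to the san=[...] list above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'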
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
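The write to /etc/sysconfig/crio.minikube above uses the standard trick for creating a root-owned file over a non-root SSH session: the redirection is performed by `tee` running under sudo, not by the caller's shell. A minimal sketch of the pattern (the file name below is hypothetical):

	# Works: tee opens the target file as root.
	printf %s "KEY='value'" | sudo tee /etc/sysconfig/example >/dev/null
	# Would fail with "Permission denied": the redirect happens in the
	# unprivileged shell before sudo ever runs.
	# sudo printf %s "KEY='value'" > /etc/sysconfig/example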
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
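The `df -h /var | awk 'NR==2{print $5}'` probe above (and its `df -BG ... {print $4}` sibling a few lines later) is how minikube samples disk pressure on the node: line 2 of the `df` output is the data row, field 5 is the use percentage, and field 4 is the remaining space. The sample values below are illustrative only:

	df -h /var  | awk 'NR==2{print $5}'   # e.g. "23%", percent of /var used
	df -BG /var | awk 'NR==2{print $4}'   # e.g. "87G", gigabytes still free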
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
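Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below before the restart picks them up. Only the keys set by this run are shown, and their exact placement within the stock drop-in is not logged, so treat this as approximate:

	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	pause_image = "registry.k8s.io/pause:3.10"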
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
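Each of the Completed: lines above is minikube applying addon manifests with the kubectl binary it manages on the node, pinned to the in-cluster kubeconfig so it never depends on the host's kubectl. A hedged sketch of one such invocation (the manifest path is one of those shown in the log):

    # Apply an addon manifest the way the log shows: the node's own versioned
    # kubectl, driven by the cluster's kubeconfig.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.32.2/kubectl apply \
      -f /etc/kubernetes/addons/metrics-server-service.yaml

The user-facing equivalent is the command the log itself prints: minikube -p no-preload-671514 addons enable metrics-server.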
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
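Compared with the v1.20.0 run above, this v1.32.2 run adds three sed passes that inject net.ipv4.ip_unprivileged_port_start=0 into CRI-O's default_sysctls, so pods can bind ports below 1024 without extra privileges. A sketch of that injection in isolation (same file and expressions as the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Drop any stale entry, ensure a default_sysctls list exists, then insert
    # the sysctl as the first element of the list.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"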
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
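The empty ExecStart= in the unit above is deliberate: a systemd drop-in for a simple service must first clear the inherited ExecStart before supplying its own, or systemd rejects the second value. A trimmed sketch of writing such a drop-in by hand (the path matches the 10-kubeadm.conf scp'd below; the flag set here is a subset of the log's and is illustrative only):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Service]
    # Clear the inherited ExecStart, then override it; without the empty line
    # systemd refuses a second ExecStart= for a non-oneshot service.
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet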
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
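The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered to /var/tmp/minikube/kubeadm.yaml.new below. This run restarts an existing control plane and never feeds them to kubeadm directly, but on a fresh node the same file would be consumed in one shot; a sketch of that untaken path:

    # Fresh-bootstrap path (not taken in this log, which reuses existing state):
    # hand the whole multi-document config to kubeadm.
    # Newer kubeadm releases can lint the file first; availability depends on
    # the kubeadm version:
    #   sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new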
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
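The link names in the ln -fs calls above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: the library resolves trust anchors in /etc/ssl/certs by hashed filename, so each CA PEM needs a <hash>.0 symlink. The convention, reconstructed for one certificate:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    # openssl x509 -hash prints the subject hash (b5213941 for the minikube CA
    # in this log); OpenSSL then finds the CA via the <hash>.0 symlink.
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"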
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
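Each openssl x509 ... -checkend 86400 call above exits non-zero if the certificate expires within 86400 seconds, which is how minikube decides whether the existing control-plane certs are good for at least one more day before reusing them. For example:

    # Exit status 0: valid for at least 24h more; non-zero: due for renewal.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert still valid for 24h"
    else
      echo "cert expires within 24h; regenerate"
    fi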
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
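The restart decision above hinges on the diff at 20:39:00.851726: when the freshly rendered kubeadm.yaml.new matches the config already on disk, minikube concludes the running cluster needs no reconfiguration and skips the expensive kubeadm phases. The same check in shell:

    # An empty diff means the rendered config equals the live one, so the
    # control plane can be reused as-is.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster does not require reconfiguration"
    fi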
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
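All SSH access to a KIC node goes through a published host port, which the log resolves with a docker inspect Go template (yielding port 33123 for default-k8s-diff-port-993330 above). The same lookup by hand, with only the profile name assumed:

    # Pull the host port Docker mapped to the container's sshd (22/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-974821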
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
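In the KubeProxyConfiguration above, maxPerCore: 0 and the zeroed timeouts are deliberate: zero values tell kube-proxy to leave the host's conntrack sysctls alone, which matters when several minikube nodes share one Docker host and none of them should rewrite kernel-wide settings. The sysctls kube-proxy would otherwise manage can be inspected directly:

    # With the zeroed config, kube-proxy skips writing these; the values stay
    # whatever the shared host already uses.
    sysctl net.netfilter.nf_conntrack_max
    sysctl net.netfilter.nf_conntrack_tcp_timeout_established
    sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait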
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
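The host.minikube.internal and control-plane.minikube.internal entries are maintained with a grep -v / append / cp cycle rather than sed -i, because /etc/hosts inside a container is typically a bind mount that can only be overwritten in place, not replaced. The pattern, generalized (NAME and IP here are the values from the log):

    NAME=control-plane.minikube.internal
    IP=192.168.103.2
    # Rebuild /etc/hosts without the old entry, append the new mapping, then
    # cp over the original; cp writes in place, so it works on a bind mount.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$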
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
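
The restarted profile is absent from the shared kubeconfig, so minikube re-adds the missing cluster and context entries under a write lock (note the 500ms retry delay and 1m timeout on the lock). A rough equivalent using client-go's clientcmd package (a sketch under that assumption, not minikube's internal kubeconfig code):

	package sketch

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairKubeconfig inserts cluster and context entries if missing.
	func repairKubeconfig(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			cluster := clientcmdapi.NewCluster()
			cluster.Server = server
			cfg.Clusters[name] = cluster
		}
		if _, ok := cfg.Contexts[name]; !ok {
			ctx := clientcmdapi.NewContext()
			ctx.Cluster = name
			ctx.AuthInfo = name
			cfg.Contexts[name] = ctx
		}
		return clientcmd.WriteToFile(*cfg, path)
	}
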
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
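
All ten dashboard manifests go through a single kubectl apply, and the kubectl used is the one shipped with the cluster's Kubernetes version under /var/lib/minikube/binaries rather than the host's. A sketch of assembling such a command line (applyManifests is a hypothetical helper):

	package sketch

	import (
		"os/exec"
		"path/filepath"
	)

	// applyManifests builds one kubectl apply with a -f flag per manifest,
	// run through sudo with the in-VM kubeconfig, as in the log above.
	func applyManifests(version string, manifests []string) *exec.Cmd {
		kubectl := filepath.Join("/var/lib/minikube/binaries", version, "kubectl")
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		return exec.Command("sudo", args...)
	}
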
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
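
No preload tarball is on the node, so the ~473 MB preloaded-images archive is copied over SSH and unpacked into /var with tar -I lz4, populating the cri-o image store in one step. A small Go sketch that reads the same tar.lz4 format (assuming the github.com/pierrec/lz4/v4 package):

	package sketch

	import (
		"archive/tar"
		"errors"
		"fmt"
		"io"
		"os"

		lz4 "github.com/pierrec/lz4/v4"
	)

	// listPreload prints every entry in a .tar.lz4 preload tarball,
	// useful for inspecting what tar -I lz4 is about to unpack.
	func listPreload(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if errors.Is(err, io.EOF) {
				return nil
			}
			if err != nil {
				return err
			}
			fmt.Println(hdr.Name)
		}
	}
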
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
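
These connection-refused failures are expected at this point: kubectl validates manifests against the apiserver's OpenAPI endpoint, and the apiserver behind localhost:8444 has not finished coming back up, so each apply is retried after an increasing delay. A sketch of that retry loop (an assumed shape; minikube's retry package may differ):

	package sketch

	import "time"

	// retryApply re-runs fn with doubling delays, the pattern behind the
	// "will retry after 167ms / 331ms" lines above.
	func retryApply(fn func() error, attempts int, base time.Duration) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}
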
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
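
The overlapping Run/Completed pairs above show the storage-provisioner, storageclass, metrics-server, and dashboard applies running in parallel and only being joined here, each finishing after roughly five seconds. A sketch of that fan-out (assuming golang.org/x/sync/errgroup; not minikube's actual orchestration):

	package sketch

	import (
		"os/exec"

		"golang.org/x/sync/errgroup"
	)

	// runAll starts every command in its own goroutine and waits for all
	// of them, returning the first error encountered.
	func runAll(cmds []*exec.Cmd) error {
		var g errgroup.Group
		for _, c := range cmds {
			g.Go(c.Run)
		}
		return g.Wait()
	}
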
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
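
Each required v1.20.0 image is probed with podman image inspect; a miss marks it "needs transfer", the stale tag is removed with crictl rmi, and a load from the local cache directory is attempted. The cache files are missing here as well, so the warning is printed and the images are left to be pulled at cluster start. A sketch of the probe step (imageMissing is a hypothetical helper):

	package sketch

	import "os/exec"

	// imageMissing mirrors the inspect probes above: a non-zero exit from
	// podman image inspect means the tag is absent from the runtime.
	func imageMissing(image string) bool {
		cmd := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image)
		return cmd.Run() != nil
	}
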
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
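
The doubled ExecStart in the unit text is a systemd idiom: an empty ExecStart= line in a drop-in clears the base unit's command so the next line can replace it with the fully flagged invocation. minikube copies this text in as 10-kubeadm.conf a few lines below; a sketch of writing such a drop-in locally (illustrative only):

	package sketch

	import (
		"os"
		"path/filepath"
	)

	// writeKubeletDropIn installs a kubelet systemd drop-in like the one
	// above; a systemctl daemon-reload must follow for it to take effect.
	func writeKubeletDropIn(unit string) error {
		dir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			return err
		}
		return os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(unit), 0o644)
	}
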
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
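The generated kubeadm.yaml is four YAML documents in one file: kubeadm's InitConfiguration and ClusterConfiguration (apiVersion kubeadm.k8s.io/v1beta2, as v1.20.0 requires), a KubeletConfiguration, and a KubeProxyConfiguration; the 0% eviction thresholds deliberately disable kubelet disk eviction inside the container. A small sketch for listing the documents in such a file (assuming gopkg.in/yaml.v3):

	package sketch

	import (
		"errors"
		"fmt"
		"io"
		"os"

		yaml "gopkg.in/yaml.v3"
	)

	// printKinds prints the apiVersion and kind of each document in a
	// multi-document kubeadm.yaml, handy when debugging generated config.
	func printKinds(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					return nil
				}
				return err
			}
			fmt.Println(doc.APIVersion, doc.Kind)
		}
	}
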
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
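
The control-plane.minikube.internal mapping is kept idempotent by the one-liner above: grep -v strips any stale line for the name, the fresh IP/name pair is appended, and the result is copied back over /etc/hosts with sudo. The same logic in Go (illustrative only):

	package sketch

	import (
		"os"
		"strings"
	)

	// upsertHostsEntry drops any line already ending in "\t<name>" and
	// appends a fresh "ip\tname" mapping, then rewrites /etc/hosts.
	func upsertHostsEntry(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
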
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
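
Each CA copied under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-name hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL locates trust anchors; the test -L guard keeps the linking idempotent. A sketch that recovers the hash by shelling out (subjectHash is a hypothetical helper):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash of a PEM certificate,
	// the value used to name the /etc/ssl/certs/<hash>.0 symlink.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", pemPath).Output()
		return strings.TrimSpace(string(out)), err
	}
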
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
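(The "WriteFile acquiring ... Delay:500ms Timeout:1m0s" line above is a lock-guarded kubeconfig repair: concurrent profiles share one kubeconfig, so writers must serialize. A minimal sketch of that pattern under stated assumptions — a plain lock file plus an atomic rename, not minikube's pkg/util/lock implementation:)

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// acquireLock creates path.lock exclusively, retrying every delay until timeout.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lock) }, nil
		}
		if !errors.Is(err, fs.ErrExist) || time.Now().After(deadline) {
			return nil, fmt.Errorf("acquiring %s: %w", lock, err)
		}
		time.Sleep(delay)
	}
}

func writeKubeconfig(path string, data []byte) error {
	release, err := acquireLock(path, 500*time.Millisecond, time.Minute)
	if err != nil {
		return err
	}
	defer release()
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o600); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic replace on the same filesystem
}

func main() {
	if err := writeKubeconfig("/tmp/kubeconfig", []byte("apiVersion: v1\nkind: Config\n")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}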
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
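(The "scp ... -->" lines above push addon manifests into the node over the container's forwarded SSH port (127.0.0.1:33118, user docker, the profile's id_rsa, all from the log). A rough equivalent using the ssh CLI in place of minikube's in-process SSH client; the local manifest filename is an assumption:)

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest, err := os.ReadFile("storage-provisioner.yaml") // local copy, assumed
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Stream the manifest bytes over SSH and write them in place with sudo,
	// mirroring the "scp ... --> /etc/kubernetes/addons/..." step in the log.
	cmd := exec.Command("ssh",
		"-p", "33118",
		"-i", "/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa",
		"docker@127.0.0.1",
		"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
	cmd.Stdin = bytes.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "copy failed: %v\n%s", err, out)
		os.Exit(1)
	}
}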
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
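(node_ready.go's 6m0s wait, which produces the recurring `"Ready":"False"` lines below, is a poll of the node's Ready condition that tolerates transient apiserver errors such as the connection-refused failures later in this log. A client-go sketch of that loop; the kubeconfig path and poll interval are assumptions:)

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Get errors (e.g. connection refused while the apiserver restarts)
		// are tolerated; just poll again.
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed env var
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := waitForNodeReady(cs, "old-k8s-version-964633", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node Ready")
}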
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
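(The long run of "apply failed, will retry after ..." entries that follows is the retry.go:31 pattern: each addon apply is re-run with a growing delay until the apiserver comes up. A minimal sketch of that loop, assuming kubectl is on PATH; minikube's randomized delays and error classification are not reproduced:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
// succeeds or attempts are exhausted, doubling the delay each time.
func applyWithRetry(manifest string, attempts int) error {
	delay := 300 * time.Millisecond // assumed base delay; the log's delays are jittered
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %v\n%s", e, out)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 10); err != nil {
		fmt.Println(err)
	}
}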
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
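	
	To verify that the addons applied above actually came up, the same cluster can be queried directly. A minimal sketch, assuming minikube's usual layout (dashboard objects in the kubernetes-dashboard namespace, metrics-server in kube-system, and a kubeconfig context named after the profile):
	
		minikube -p old-k8s-version-964633 addons list
		kubectl --context old-k8s-version-964633 -n kubernetes-dashboard get pods
		kubectl --context old-k8s-version-964633 -n kube-system get deploy metrics-server
	
	None of these addon pods can become Ready while the node itself stays NotReady, which is what the surrounding node_ready.go polling shows.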
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
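	
	The failed wait above can be reproduced outside of minikube's own polling loop. A minimal sketch, assuming the kubeconfig context is named after the profile (minikube's default):
	
		kubectl --context no-preload-671514 get nodes -o wide
		kubectl --context no-preload-671514 wait --for=condition=Ready node/no-preload-671514 --timeout=6m
	
	With the node stuck in NotReady, the wait times out just as the GUEST_START error does; the CRI-O log below points at the underlying cause.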
	
	
	==> CRI-O <==
	Apr 01 20:38:59 no-preload-671514 crio[550]: time="2025-04-01 20:38:59.218582067Z" level=info msg="Started container" PID=1174 containerID=ea145bd33786beab5695edea53c4427b5de9ac7e59c201cefdd36226f43e54ca description=kube-system/kube-proxy-pfvch/kube-proxy id=e8257ed1-83cd-498c-a10d-72ff17ae77c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce01896c90f7740599d1a39fcd8b5c1b9078f803f3ea9d15853f2a3977380487
	Apr 01 20:39:31 no-preload-671514 crio[550]: time="2025-04-01 20:39:31.696684338Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8a98633a-de75-4c92-8f52-dd87df47c641 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:31 no-preload-671514 crio[550]: time="2025-04-01 20:39:31.696914841Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8a98633a-de75-4c92-8f52-dd87df47c641 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:44 no-preload-671514 crio[550]: time="2025-04-01 20:39:44.555122645Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4bfa4ad6-3945-40a9-9897-3e529802feb2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:44 no-preload-671514 crio[550]: time="2025-04-01 20:39:44.555498548Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4bfa4ad6-3945-40a9-9897-3e529802feb2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:44 no-preload-671514 crio[550]: time="2025-04-01 20:39:44.555957850Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1586417d-ff43-44cc-8170-90ba51f0e038 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:39:44 no-preload-671514 crio[550]: time="2025-04-01 20:39:44.557162170Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:40:27 no-preload-671514 crio[550]: time="2025-04-01 20:40:27.554944768Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=9a5349fe-3c2c-4e54-a2a4-ef9cd3b6f7d3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:27 no-preload-671514 crio[550]: time="2025-04-01 20:40:27.555247033Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=9a5349fe-3c2c-4e54-a2a4-ef9cd3b6f7d3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:38 no-preload-671514 crio[550]: time="2025-04-01 20:40:38.555169510Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=dd3648d2-a227-4e59-9006-341865bedf11 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:38 no-preload-671514 crio[550]: time="2025-04-01 20:40:38.555455537Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=dd3648d2-a227-4e59-9006-341865bedf11 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:38 no-preload-671514 crio[550]: time="2025-04-01 20:40:38.555956860Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c4fc620d-bc54-4d57-8f80-01d8201c85f0 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:40:38 no-preload-671514 crio[550]: time="2025-04-01 20:40:38.557238182Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:41:23 no-preload-671514 crio[550]: time="2025-04-01 20:41:23.555285972Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f0433bda-a717-43e5-ade4-acd1e3a8fc36 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:23 no-preload-671514 crio[550]: time="2025-04-01 20:41:23.555547295Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=f0433bda-a717-43e5-ade4-acd1e3a8fc36 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:37 no-preload-671514 crio[550]: time="2025-04-01 20:41:37.555590781Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=55bd4885-fd55-41b5-bdc8-7d4b58a07233 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:37 no-preload-671514 crio[550]: time="2025-04-01 20:41:37.555850644Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=55bd4885-fd55-41b5-bdc8-7d4b58a07233 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:49 no-preload-671514 crio[550]: time="2025-04-01 20:41:49.555497262Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a2970d68-6d30-42c7-bfb6-94c33f5ba498 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:49 no-preload-671514 crio[550]: time="2025-04-01 20:41:49.555731119Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a2970d68-6d30-42c7-bfb6-94c33f5ba498 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:02 no-preload-671514 crio[550]: time="2025-04-01 20:42:02.555095482Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ebc76da8-afbe-4afc-b79a-5593d59ade39 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:02 no-preload-671514 crio[550]: time="2025-04-01 20:42:02.555292562Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ebc76da8-afbe-4afc-b79a-5593d59ade39 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:02 no-preload-671514 crio[550]: time="2025-04-01 20:42:02.555780003Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=27788785-06af-486e-a14e-3f1b5d0b1a2e name=/runtime.v1.ImageService/PullImage
	Apr 01 20:42:02 no-preload-671514 crio[550]: time="2025-04-01 20:42:02.556945793Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:42:49 no-preload-671514 crio[550]: time="2025-04-01 20:42:49.555637663Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e5851a6f-b25d-4b6f-a0c1-3558bd6acc14 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:49 no-preload-671514 crio[550]: time="2025-04-01 20:42:49.556004461Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e5851a6f-b25d-4b6f-a0c1-3558bd6acc14 name=/runtime.v1.ImageService/ImageStatus
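	
	The CRI-O log repeats one pattern for roughly four minutes: the kindnetd image (the CNI plugin) is "not found" locally, a pull against docker.io is started, and no corresponding "Pulled image" entry ever appears. A possible workaround sketch, assuming the image exists in the host's local image cache and the profile name matches the node above:
	
		# Load the image from the host into the cluster, bypassing the registry:
		minikube -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a
		# Or retry the pull on the node itself to surface the registry error:
		minikube -p no-preload-671514 ssh -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	
	Until this image is present, the kindnet pod cannot start and no CNI configuration gets written.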
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea145bd33786b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   3 minutes ago       Running             kube-proxy                1                   ce01896c90f77       kube-proxy-pfvch
	ee48c6782a18b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            1                   56ea918890fe0       kube-apiserver-no-preload-671514
	c433696fcee19       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   1                   84d0bba648e43       kube-controller-manager-no-preload-671514
	b1d13381b02cc       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            1                   b988612136b4f       kube-scheduler-no-preload-671514
	c26ee68cb1e41       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      1                   aba801a800b41       etcd-no-preload-671514
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:42:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:38:58 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:38:58 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:38:58 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:38:58 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 607874eb563c47059868a4160125dbb6
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 3m56s                kube-proxy       
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                  node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
	  Normal   Starting                 4m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m2s (x8 over 4m2s)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m54s                node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
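	
	The describe output ties the failure together: the node has carried the node.kubernetes.io/not-ready taint since creation because the kubelet reports NetworkReady=false with no CNI configuration file in /etc/cni/net.d/, and the kindnet pod that would write that file never started because its image never pulled. A quick check sketch, assuming minikube's kindnet DaemonSet carries its usual app=kindnet label:
	
		minikube -p no-preload-671514 ssh -- ls /etc/cni/net.d/
		kubectl --context no-preload-671514 -n kube-system get pods -l app=kindnet -o wide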
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [c26ee68cb1e41434cb1773276a80f9b07dd93b734f39daae74d2886e50d29ba0] <==
	{"level":"info","ts":"2025-04-01T20:38:55.525239Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-01T20:38:55.525329Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-01T20:38:55.525485Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:38:55.525537Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:38:55.525733Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-04-01T20:38:55.526485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:38:55.526538Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:38:57.022450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.023544Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:38:57.023604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.023936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.024487Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.024568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.025105Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:38:57.025225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:00.430539Z","caller":"traceutil/trace.go:171","msg":"trace[280012238] transaction","detail":"{read_only:false; response_revision:772; number_of_response:1; }","duration":"101.218224ms","start":"2025-04-01T20:39:00.329302Z","end":"2025-04-01T20:39:00.430521Z","steps":["trace[280012238] 'process raft request'  (duration: 46.826091ms)","trace[280012238] 'compare'  (duration: 54.291765ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:00.548330Z","caller":"traceutil/trace.go:171","msg":"trace[1807709246] transaction","detail":"{read_only:false; response_revision:773; number_of_response:1; }","duration":"108.767351ms","start":"2025-04-01T20:39:00.439528Z","end":"2025-04-01T20:39:00.548295Z","steps":["trace[1807709246] 'process raft request'  (duration: 96.291629ms)","trace[1807709246] 'compare'  (duration: 12.091718ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:42:56 up  1:25,  0 users,  load average: 1.22, 1.10, 1.56
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ee48c6782a18ba4755d82a0a5bf1ad1b855dfd1d70fdd7295d33e8a88f8775d5] <==
	I0401 20:39:00.431160       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.131.32"}
	I0401 20:39:00.555154       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.170.4"}
	I0401 20:39:02.315118       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:39:02.315172       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:39:02.516478       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:39:02.516480       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:39:02.863625       1 controller.go:615] quota admission added evaluator for: endpoints
	W0401 20:39:59.147870       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:39:59.147894       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:39:59.147937       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:39:59.147963       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:39:59.149060       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:39:59.149074       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:41:59.149677       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:41:59.149677       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:41:59.149740       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:41:59.149818       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:41:59.150882       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:41:59.150898       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c433696fcee19b99e87b3d9433f8add31e3b93cb7663068ef9be96761a9725fd] <==
	I0401 20:39:02.384942       1 shared_informer.go:320] Caches are synced for expand
	I0401 20:39:02.389032       1 shared_informer.go:320] Caches are synced for attach detach
	I0401 20:39:02.393244       1 shared_informer.go:320] Caches are synced for PVC protection
	I0401 20:39:02.395421       1 shared_informer.go:320] Caches are synced for endpoint
	I0401 20:39:02.402190       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0401 20:39:02.932291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="409.428059ms"
	I0401 20:39:02.943180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="419.96954ms"
	I0401 20:39:02.957046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="13.747009ms"
	I0401 20:39:02.957201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="54.004µs"
	I0401 20:39:02.963882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="31.457109ms"
	I0401 20:39:02.964238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="215.603µs"
	E0401 20:39:32.320395       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:39:32.364872       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:02.326843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:02.372750       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:32.332848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:32.380470       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:02.338168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:02.387714       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:32.343940       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:32.395420       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:02.349847       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:02.403270       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:32.355490       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:32.410445       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ea145bd33786beab5695edea53c4427b5de9ac7e59c201cefdd36226f43e54ca] <==
	I0401 20:38:59.352570       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:38:59.739049       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:38:59.739232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:38:59.932876       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:38:59.932949       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:38:59.936073       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:38:59.936478       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:38:59.936515       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:59.939364       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:00.018698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:38:59.961970       1 config.go:199] "Starting service config controller"
	I0401 20:39:00.018788       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:38:59.963606       1 config.go:329] "Starting node config controller"
	I0401 20:39:00.018803       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:00.121850       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:39:00.121958       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:00.122020       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b1d13381b02cc94d594efb9905918a3d246d7722a4c6dbc1796409ac561c2e3d] <==
	I0401 20:38:56.385160       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:38:58.139246       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:38:58.139285       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:38:58.139315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:38:58.139326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:38:58.244037       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:38:58.244065       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:58.245973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:38:58.246009       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:38:58.246168       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:38:58.246306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:38:58.348872       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:42:09 no-preload-671514 kubelet[663]: E0401 20:42:09.595368     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:14 no-preload-671514 kubelet[663]: E0401 20:42:14.572204     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540134571993082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:14 no-preload-671514 kubelet[663]: E0401 20:42:14.572251     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540134571993082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:14 no-preload-671514 kubelet[663]: E0401 20:42:14.596120     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:19 no-preload-671514 kubelet[663]: E0401 20:42:19.596763     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:24 no-preload-671514 kubelet[663]: E0401 20:42:24.573550     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540144573351882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:24 no-preload-671514 kubelet[663]: E0401 20:42:24.573589     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540144573351882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:24 no-preload-671514 kubelet[663]: E0401 20:42:24.598109     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:29 no-preload-671514 kubelet[663]: E0401 20:42:29.599804     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.334245     663 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.334318     663 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.334516     663 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82wpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-5tgtq_kube-system(60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5): ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.335777     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.574781     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540154574563766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.574826     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540154574563766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:34 no-preload-671514 kubelet[663]: E0401 20:42:34.601132     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:39 no-preload-671514 kubelet[663]: E0401 20:42:39.602451     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:44 no-preload-671514 kubelet[663]: E0401 20:42:44.575761     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540164575546764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:44 no-preload-671514 kubelet[663]: E0401 20:42:44.575802     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540164575546764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:44 no-preload-671514 kubelet[663]: E0401 20:42:44.603774     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:49 no-preload-671514 kubelet[663]: E0401 20:42:49.556356     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:42:49 no-preload-671514 kubelet[663]: E0401 20:42:49.604947     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:54 no-preload-671514 kubelet[663]: E0401 20:42:54.577007     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540174576829922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:54 no-preload-671514 kubelet[663]: E0401 20:42:54.577046     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540174576829922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:54 no-preload-671514 kubelet[663]: E0401 20:42:54.606190     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
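
The kubelet entries above show the failure cascade for this test: the kindnet-cni image pull from docker.io is rejected with toomanyrequests (the unauthenticated Docker Hub rate limit), so no CNI config ever lands in /etc/cni/net.d/, the node never reports Ready, and every workload stays Pending. One way to take Docker Hub out of the loop is to side-load the image into the profile before the cluster needs to pull it. A minimal Go sketch (not part of the harness; it assumes the image is reachable from the host, e.g. already in the host's Docker daemon, and reuses the binary path, profile, and tag seen in this run):

// preload_kindnet.go: check whether the cluster's runtime already has the
// kindnet image, and side-load it from the host if not, so the in-cluster
// pull from docker.io (which hit the rate limit above) never happens.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const (
		minikube = "out/minikube-linux-amd64" // binary used elsewhere in this report
		profile  = "no-preload-671514"
		image    = "docker.io/kindest/kindnetd:v20250214-acbabc1a"
	)

	// Ask the cluster's container runtime what it already has.
	var out bytes.Buffer
	ls := exec.Command(minikube, "-p", profile, "image", "ls")
	ls.Stdout = &out
	if err := ls.Run(); err != nil {
		log.Fatalf("image ls: %v", err)
	}
	if strings.Contains(out.String(), image) {
		fmt.Println("image already present, nothing to do")
		return
	}

	// Load the image from the host instead of pulling inside the cluster.
	if err := exec.Command(minikube, "-p", profile, "image", "load", image).Run(); err != nil {
		log.Fatalf("image load: %v", err)
	}
	fmt.Println("loaded", image)
}

The equivalent one-liners are "out/minikube-linux-amd64 -p no-preload-671514 image ls" and "out/minikube-linux-amd64 -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a".
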
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1 (76.93954ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hxxvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m59s                default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  6m53s (x2 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-28pk4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-nmk5v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-d2blk" not found

                                                
                                                
** /stderr **
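
The NotFound errors in the stderr block are a namespace artifact rather than evidence the pods were deleted: the post-mortem helper passes bare pod names to kubectl describe, which defaults to the "default" namespace, so only busybox resolves while the kube-system and kubernetes-dashboard pods come back NotFound. A namespace-aware variant would list namespace/name pairs first and describe each pod where it lives; a sketch in Go, assuming the same kubectl context:

// describe_nonrunning.go: list non-running pods with their namespaces, then
// describe each one in its own namespace instead of defaulting to "default".
package main

import (
	"bufio"
	"bytes"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	ctx := "no-preload-671514" // kubectl context used throughout this test

	// Same field selector as the helper above, but keep the namespace.
	var out bytes.Buffer
	list := exec.Command("kubectl", "--context", ctx, "get", "pods", "-A",
		"--field-selector=status.phase!=Running",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}`)
	list.Stdout = &out
	if err := list.Run(); err != nil {
		log.Fatalf("list: %v", err)
	}

	sc := bufio.NewScanner(&out)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 {
			continue
		}
		ns, name := fields[0], fields[1]
		desc := exec.Command("kubectl", "--context", ctx, "describe", "pod", "-n", ns, name)
		desc.Stdout = os.Stdout
		desc.Stderr = os.Stderr
		_ = desc.Run()
	}
}
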
helpers_test.go:279: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (250.61s)
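
The busybox FailedScheduling events (untolerated node.kubernetes.io/not-ready taint) and the kubelet's "No CNI configuration file" errors are the same condition seen from the scheduler's side and the node's side. A quick way to confirm the node never reached Ready is to read its taints and conditions directly; a small sketch against the same context:

// node_ready_check.go: if the CNI never comes up, the Ready condition stays
// False and the not-ready taint stays on, which is exactly what the busybox
// FailedScheduling events report.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ctx, node := "no-preload-671514", "no-preload-671514"

	taints, err := exec.Command("kubectl", "--context", ctx, "get", "node", node,
		"-o", `jsonpath={.spec.taints[*].key}`).Output()
	if err != nil {
		log.Fatalf("taints: %v", err)
	}
	conds, err := exec.Command("kubectl", "--context", ctx, "get", "node", node,
		"-o", `jsonpath={range .status.conditions[*]}{.type}={.status}{"\n"}{end}`).Output()
	if err != nil {
		log.Fatalf("conditions: %v", err)
	}
	fmt.Printf("taints: %s\nconditions:\n%s", taints, conds)
}
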

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (255.84s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m13.73420807s)

                                                
                                                
-- stdout --
	* [embed-certs-974821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "embed-certs-974821" primary control-plane node in "embed-certs-974821" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Restarting existing docker container for "embed-certs-974821" ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:38:53.008426  351594 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:53.008667  351594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:53.008677  351594 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:53.008681  351594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:53.008880  351594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:53.009483  351594 out.go:352] Setting JSON to false
	I0401 20:38:53.010549  351594 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4879,"bootTime":1743535054,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:53.010609  351594 start.go:139] virtualization: kvm guest
	I0401 20:38:53.012925  351594 out.go:177] * [embed-certs-974821] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:53.014335  351594 notify.go:220] Checking for updates...
	I0401 20:38:53.014363  351594 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:53.015625  351594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:53.016916  351594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:53.017975  351594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:53.019086  351594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:53.020242  351594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:53.022022  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:53.022697  351594 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:53.055160  351594 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:53.055279  351594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:53.129383  351594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-01 20:38:53.117342846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:53.129543  351594 docker.go:318] overlay module found
	I0401 20:38:53.131898  351594 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:53.133222  351594 start.go:297] selected driver: docker
	I0401 20:38:53.133238  351594 start.go:901] validating driver "docker" against &{Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:53.133347  351594 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:53.134449  351594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:53.195891  351594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:57 SystemTime:2025-04-01 20:38:53.186734057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:53.196177  351594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:53.196208  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:53.196252  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:53.196281  351594 start.go:340] cluster config:
	{Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:53.198198  351594 out.go:177] * Starting "embed-certs-974821" primary control-plane node in "embed-certs-974821" cluster
	I0401 20:38:53.199552  351594 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:53.200633  351594 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:53.201801  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:53.201837  351594 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:53.201848  351594 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:53.201912  351594 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:53.201926  351594 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:53.201933  351594 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:53.202032  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.234486  351594 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:53.234513  351594 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:53.234530  351594 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:53.234558  351594 start.go:360] acquireMachinesLock for embed-certs-974821: {Name:mk504873d11b3a69d78cbbe682dafb679598342b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:53.234620  351594 start.go:364] duration metric: took 43.406µs to acquireMachinesLock for "embed-certs-974821"
	I0401 20:38:53.234640  351594 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:53.234645  351594 fix.go:54] fixHost starting: 
	I0401 20:38:53.234833  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.253478  351594 fix.go:112] recreateIfNeeded on embed-certs-974821: state=Stopped err=<nil>
	W0401 20:38:53.253503  351594 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:53.255273  351594 out.go:177] * Restarting existing docker container for "embed-certs-974821" ...
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
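For reference, the guarded /etc/hosts rewrite above (check for an existing mapping, patch the 127.0.1.1 line if present, otherwise append) can be expressed as a small Go program. This is a sketch of the same idempotent logic, not minikube's code: the function name ensureHostsEntry is invented here, and minikube actually runs the shell shown above over SSH.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell above: do nothing if the hostname is
// already mapped, rewrite an existing 127.0.1.1 line if there is one, and
// append a new entry otherwise.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return nil // already mapped, nothing to do
		}
	}
	patched := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // replace the loopback alias
			patched = true
			break
		}
	}
	if !patched {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-974821"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}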
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
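provision.go:117 is issuing a CA-signed server certificate whose SANs cover every name and address the machine may be reached on. A rough Go sketch of that kind of issuance with crypto/x509 follows; it is illustrative only (the 2048-bit keys, fixed serial numbers, and freshly generated CA here are assumptions standing in for ca.pem/ca-key.pem, and minikube's real implementation lives elsewhere).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA key pair; in the log this is loaded from ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-974821"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"embed-certs-974821", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}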
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
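The two find/-exec mv passes above neutralize loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime stops loading them. A minimal Go equivalent of that rename pass (disableCNIConfigs and its parameters are invented for the sketch; minikube itself shells out to find as shown):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames files matching the given glob patterns to
// <name>.mk_disabled, skipping ones that are already disabled.
func disableCNIConfigs(dir string, patterns []string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	_ = disableCNIConfigs("/etc/cni/net.d", []string{"*loopback.conf*", "*bridge*", "*podman*"})
}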
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
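Taken together, the sed edits above converge the CRI-O drop-in on a cgroupfs configuration before the restart. The sketch below renders a plausible end state of /etc/crio/crio.conf.d/02-crio.conf as a Go string constant; only the keys and values appear in the log, so the [crio.image]/[crio.runtime] table placement is an assumption.

package main

import "fmt"

// expectedCrioDropIn approximates 02-crio.conf after the edits above:
// pause image pinned, cgroupfs as cgroup manager, conmon in the pod
// cgroup, and unprivileged low ports opened via default_sysctls.
const expectedCrioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(expectedCrioDropIn) }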
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
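"Will wait 60s for socket path" is a simple existence poll on the CRI socket. Sketched in Go under the same timeout (minikube actually runs stat over SSH rather than polling in-process, and the 500ms cadence here is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the CRI socket until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}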
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
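The 2292-byte kubeadm.yaml.new written here is the multi-document config dumped above. Note that it uses kubeadm's v1beta4 form, where extraArgs is a list of name/value pairs rather than the flat maps kubeadm.go:189 logs. A toy sketch of that map-to-list conversion (the arg struct and toExtraArgs are invented for illustration, not minikube's code):

package main

import (
	"fmt"
	"sort"
)

// arg mirrors a kubeadm v1beta4 extraArgs entry: a name/value pair
// instead of the flat map used by earlier API versions.
type arg struct{ Name, Value string }

func toExtraArgs(m map[string]string) []arg {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output, like the generated YAML
	out := make([]arg, 0, len(m))
	for _, k := range keys {
		out = append(out, arg{k, m[k]})
	}
	return out
}

func main() {
	// The controllerManager flags from the options struct above.
	for _, a := range toExtraArgs(map[string]string{
		"allocate-node-cidrs": "true",
		"leader-elect":        "false",
	}) {
		fmt.Printf("- name: %q\n  value: %q\n", a.Name, a.Value)
	}
}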
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
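The test -L / ln -fs checks above wire each CA into OpenSSL's hashed lookup scheme: the link name is the certificate's subject hash plus ".0" (b5213941.0 for minikubeCA.pem in this run). A Go sketch of the same pattern, shelling out to openssl exactly as the log does; linkBySubjectHash is a made-up name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a certificate and
// exposes it under certsDir as <hash>.0, which is how OpenSSL-based clients
// discover trusted CAs.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}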
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
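Each -checkend 86400 probe asks openssl whether the certificate expires within the next 24 hours; a non-zero exit means it does, which feeds minikube's decision about whether control-plane certs need regenerating. A small Go wrapper around the same probe (expiresWithinADay is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay runs the same check as the log lines above: openssl
// exits non-zero when the certificate will expire within 86400 seconds.
func expiresWithinADay(certPath string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: expiring within 24h
		}
		return false, err // openssl itself failed to run
	}
	return false, nil
}

func main() {
	soon, err := expiresWithinADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(soon, err)
}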
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
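Everything after this point is minikube's readiness poll: node_ready.go re-reads the Node object every ~2.5s for up to 6m0s, and in this run the Ready condition never flips to True, which is what ultimately fails SecondStart. A client-go sketch of such a loop, assuming standard k8s.io/client-go wiring (the helper name waitNodeReady is invented; minikube's own loop lives in node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or the
// deadline passes, roughly matching the cadence seen in the log below.
func waitNodeReady(kubeconfig, node string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("node %q never reported Ready within %s", node, timeout)
}

func main() {
	err := waitNodeReady("/home/jenkins/minikube-integration/20506-16361/kubeconfig", "embed-certs-974821", 6*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}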
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	[... node_ready.go:53 checks repeated at ~2.5s intervals from 20:39:11 through 20:42:39, each reporting node "embed-certs-974821" has status "Ready":"False" ...]
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-974821 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
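The repeated node_ready.go:53 lines above are minikube's readiness wait polling the node's Ready condition until its budget expires, which is what surfaces as "waitNodeCondition: context deadline exceeded" and exit status 80. For orientation only, here is a minimal client-go sketch of such a poll; it is not minikube's actual node_ready.go code, and the 2-second interval, kubeconfig path, and timeout are assumptions taken from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition until it is True or ctx expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(2 * time.Second) // assumed interval, roughly matching the log cadence
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded", as in the log
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s node wait from the log
	defer cancel()
	if err := waitNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "embed-certs-974821"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}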
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.286446875Z",
	            "FinishedAt": "2025-04-01T20:38:52.118073098Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a400a933eabcb680d1a6c739c40c6e1e691bc1d846119585a6bea14a4faf054",
	            "SandboxKey": "/var/run/docker/netns/3a400a933eab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:df:19:aa:43:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "fcd49a1d7a931c51670bb1639475ceebb2f5e6078df77f57455465bfc6426ab5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
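The NetworkSettings.Ports map in this inspect output (22/tcp mapped to 127.0.0.1:33113, and so on) is what the harness's cli_runner calls later in this report read back with a Go template. As a hedged illustration only, a Go sketch that shells out with the same template string seen in the Last Start log below; hostSSHPort is a hypothetical helper, not the harness's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort recovers the host-mapped SSH port of a minikube container,
// using the same docker Go-template the cli_runner.go entries use.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("embed-certs-974821")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // e.g. 33113 per the inspect output above
}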
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25: (1.096109681s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
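The fix.go path above only recreates a machine when the container is truly gone; a container found in state=Stopped is simply restarted. A minimal Go sketch of that inspect-then-start decision, shelling out to the docker CLI the way cli_runner does — the helper names here are illustrative, not minikube's actual functions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns the docker container state ("running", "exited", ...).
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	// ensureRunning restarts a stopped container instead of recreating it.
	func ensureRunning(name string) error {
		state, err := containerState(name)
		if err != nil {
			return err
		}
		if state == "running" {
			return nil
		}
		// "unexpected machine state, will restart" in the log maps to this branch.
		return exec.Command("docker", "start", name).Run()
	}

	func main() {
		if err := ensureRunning("default-k8s-diff-port-993330"); err != nil {
			fmt.Println(err)
		}
	}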
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
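The --format template on the line above flattens `docker network inspect` into a single JSON object. A sketch of consuming it from Go, where the struct fields mirror the template's keys (they are assumptions for illustration, not minikube's types); note the template's range over .Containers leaves a trailing comma that has to be patched before unmarshalling:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// networkInfo mirrors the keys emitted by the --format template above.
	type networkInfo struct {
		Name         string   `json:"Name"`
		Driver       string   `json:"Driver"`
		Subnet       string   `json:"Subnet"`
		Gateway      string   `json:"Gateway"`
		MTU          int      `json:"MTU"`
		ContainerIPs []string `json:"ContainerIPs"`
	}

	func inspectNetwork(name string) (*networkInfo, error) {
		format := `{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
			`"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
			`"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}",` +
			`"MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},` +
			` "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}`
		out, err := exec.Command("docker", "network", "inspect", name, "--format", format).Output()
		if err != nil {
			return nil, err
		}
		// Strip the trailing comma the template leaves before the closing bracket.
		cleaned := strings.ReplaceAll(strings.TrimSpace(string(out)), ",]", "]")
		var info networkInfo
		if err := json.Unmarshal([]byte(cleaned), &info); err != nil {
			return nil, err
		}
		return &info, nil
	}

	func main() {
		info, err := inspectNetwork("no-preload-671514")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("%+v\n", *info)
	}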
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
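The one-liner above makes the host.minikube.internal mapping idempotent: grep -v strips any stale line ending in the host name, the fresh mapping is appended, and the result is staged under /tmp before being copied over /etc/hosts with sudo (a plain shell redirect would not cross the privilege boundary). A rough Go equivalent of the same upsert, for illustration only:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry rewrites hostsPath so that exactly one line maps ip to host,
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func upsertHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any stale mapping for this host (matches grep -v $'\t<host>$').
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		for len(kept) > 0 && kept[len(kept)-1] == "" {
			kept = kept[:len(kept)-1]
		}
		kept = append(kept, ip+"\t"+host)
		// Stage in a temp file first, then swap it in (the /tmp/h.$$ + cp step).
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}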
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
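The kubeadm config above is rendered from the KubernetesConfig fields shown earlier and then scp'd to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch of that rendering, trimmed to a few fields; the struct and template here are illustrative stand-ins, not minikube's actual bootstrapper code:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams is a trimmed stand-in for the cluster settings in the log;
	// the field names are assumptions for this sketch.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.76.2",
			BindPort:         8443,
			NodeName:         "no-preload-671514",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.32.2",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// The rendered YAML is what gets staged as kubeadm.yaml.new.
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}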
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
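The openssl x509 -hash / ln -fs pairs above install each CA under its OpenSSL subject hash (e.g. b5213941.0) in /etc/ssl/certs, which is how TLS stacks locate trusted roots by directory lookup. A sketch of the same step in Go, assuming the openssl binary is on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links certPath into certsDir under its OpenSSL subject hash,
	// the same "openssl x509 -hash" + "ln -fs <cert> <hash>.0" dance as above.
	func installCA(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}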
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
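openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what would force regeneration on this start. The same check natively with crypto/x509; the path in main is one of the certs from this run:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires inside d,
	// the native equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon) // a hit would trigger regeneration
	}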
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
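kubeconfig.go detects above that the profile's cluster and context stanzas are missing and repairs the file under a write lock. A rough sketch of such a repair using client-go's clientcmd package — an assumed approach, not minikube's actual code; the server URL and paths are taken from this run:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairKubeconfig adds missing cluster and context stanzas for a profile,
	// roughly what the "needs updating (will repair)" line above describes.
	func repairKubeconfig(path, name, server, caPath string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			cfg.Clusters[name] = &api.Cluster{
				Server:               server,
				CertificateAuthority: caPath,
			}
		}
		if _, ok := cfg.Contexts[name]; !ok {
			cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = repairKubeconfig(
			"/home/jenkins/minikube-integration/20506-16361/kubeconfig",
			"no-preload-671514",
			"https://192.168.76.2:8443",
			"/home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt",
		)
	}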
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
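Whether the control plane needs reconfiguring is decided by the diff -u a few lines up: exit code 0 means the staged kubeadm.yaml.new matches the live kubeadm.yaml, so kubeadm can be skipped. A sketch of mapping that exit code in Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// needsReconfig runs `diff -u old new` and maps the exit status:
	// 0 = identical (skip kubeadm), 1 = changed, anything else is a real error.
	func needsReconfig(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("reconfiguration required:", changed)
	}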
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
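Each addon is staged by scp'ing its manifests into /etc/kubernetes/addons and then applied in a single kubectl invocation against the cluster's own kubeconfig, as in the command above. A minimal sketch of that apply step (binary and manifest paths are copied from this run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests mirrors the "sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml"
	// pattern above: one kubectl run, one -f flag per staged addon manifest.
	func applyManifests(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		err := applyManifests(
			"/var/lib/minikube/binaries/v1.32.2/kubectl",
			"/var/lib/minikube/kubeconfig",
			[]string{
				"/etc/kubernetes/addons/metrics-apiservice.yaml",
				"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			},
		)
		if err != nil {
			fmt.Println(err)
		}
	}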
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
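The "connection reset by peer" a few lines up is expected: sshd inside the just-restarted container is not yet accepting connections, so the dial is retried until the hostname command succeeds (about three seconds here). A sketch of such a retry loop, assuming golang.org/x/crypto/ssh; the address and key path are the values from this run:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry keeps redialing until sshd inside the freshly started
	// container accepts the handshake (the "connection reset by peer" window).
	func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig, not production
			Timeout:         5 * time.Second,
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			time.Sleep(time.Second)
		}
		return nil, fmt.Errorf("ssh %s: %w", addr, lastErr)
	}

	func main() {
		client, err := dialWithRetry("127.0.0.1:33113", "docker",
			"/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa", 30)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer client.Close()
	}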
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
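provision.go:117 issues a server certificate whose SANs cover the loopback address, the container IP, and the machine's names, signed by the profile's CA. A self-contained crypto/x509 sketch of signing such a cert — the shape is illustrative, not minikube's actual provisioner:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate with the given CA, sorting the
	// mixed SAN list from the log (127.0.0.1, 192.168.94.2, hostnames) into
	// IPAddresses and DNSNames as appropriate.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-974821"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Errors elided for brevity: build a throwaway CA, then sign against it.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)
		_, _, _ = newServerCert(ca, caKey,
			[]string{"127.0.0.1", "192.168.94.2", "embed-certs-974821", "localhost", "minikube"})
	}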
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
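	(Note: the handshake reset above is transient; the container was started at 20:38:54 and sshd is not yet accepting connections. The provisioner retries, and the same hostname command succeeds three seconds later, below.)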
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
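	(Net effect of the script above: /etc/hosts inside the container ends up with the loopback alias line
	
		127.0.1.1 default-k8s-diff-port-993330
	
	either by rewriting an existing 127.0.1.1 entry or by appending one, exactly as the grep/sed/tee branches spell out.)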
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
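	(The server cert generated by provision.go above carries the SAN list san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]. To confirm what landed in /etc/docker/server.pem after the scp, standard openssl suffices; this check is not a step in the log:
	
		openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
		# expect DNS:default-k8s-diff-port-993330, DNS:localhost, DNS:minikube,
		# IP Address:127.0.0.1, IP Address:192.168.103.2 (order may vary)
	
	)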
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
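	(The two find/mv passes above disable any preinstalled CNI configs by renaming them with a .mk_disabled suffix, for example (the concrete filename here is hypothetical):
	
		/etc/cni/net.d/200-loopback.conf -> /etc/cni/net.d/200-loopback.conf.mk_disabled
	
	leaving the kindnet config that minikube installs later (see "recommending kindnet" below) as the only one cri-o loads.)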
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
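	(Taken together, the sed edits above steer /etc/crio/crio.conf.d/02-crio.conf toward settings like the following; this is a reconstruction from those commands, with the TOML section headers assumed from stock cri-o layout rather than shown in the log:
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10"
	
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The crictl.yaml written just before simply points crictl at the same socket, runtime-endpoint: unix:///var/run/crio/crio.sock.)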
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
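	(The rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new and only swapped in if it differs from the active file (see the sudo diff -u run further down). On kubeadm v1.32 it could also be sanity-checked offline with a command like the following, which is not part of this log:
	
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	
	)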
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
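	(The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is exactly what the logged openssl x509 -hash -noout runs print. The long-form flag gives the same value:
	
		openssl x509 -subject_hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# b5213941
	
	so the /etc/ssl/certs/<hash>.0 symlinks are what OpenSSL's default verification path expects.)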
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
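	(Each -checkend 86400 run above exits 0 only if the certificate will still be valid 86400 seconds, i.e. 24 hours, from now; a non-zero exit would flag an expiring control-plane cert before the restart. All of them pass silently here.)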
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
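
The restart decision above hinges on one command: sudo diff -u between the kubeadm.yaml on disk and the freshly rendered kubeadm.yaml.new. Exit status 0 means the two match and the running control plane can be reused ("does not require reconfiguration"); status 1 would force a reconfigure. A sketch of mapping that exit status in Go (needsReconfig is illustrative):

package kubeadmdiff

import (
	"errors"
	"os/exec"
)

// needsReconfig runs diff the way the log does and maps its exit status:
// 0 = identical (no reconfiguration), 1 = files differ, anything else is
// a real error (missing file, unreadable, ...).
func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}
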
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
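
The bash one-liner above is the usual trick for editing a root-owned file from an unprivileged shell: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, then sudo cp it into place (a plain sudo with output redirection would redirect as the calling user and fail). The same pattern repeats below for control-plane.minikube.internal. The filter-and-append step in Go, leaving the privileged copy aside (pinHost is illustrative):

package hostspin

import "strings"

// pinHost returns what /etc/hosts should contain after removing any line
// already mapping name (matched by a trailing "\t<name>", like the grep -v)
// and appending the fresh "ip\tname" entry.
func pinHost(hosts []byte, ip, name string) string {
	lines := strings.Split(strings.TrimRight(string(hosts), "\n"), "\n")
	var keep []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}
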
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
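
Two details of the generated unit above are easy to miss: the bare ExecStart= line clears the ExecStart inherited from the base kubelet.service before the drop-in sets its own (systemd requires this reset before redefining the command), and the text is installed as a drop-in, which the later scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf confirms. A sketch of rendering such a drop-in with text/template (the template body paraphrases the log; the params type and renderDropIn are illustrative):

package kubeletunit

import (
	"bytes"
	"text/template"
)

// dropIn paraphrases the drop-in from the log; the empty ExecStart= resets
// the command inherited from the base kubelet.service before redefining it.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

type params struct{ Runtime, Version, Node, IP string }

// renderDropIn produces the body written to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
func renderDropIn(p params) (string, error) {
	t, err := template.New("dropin").Parse(dropIn)
	if err != nil {
		return "", err
	}
	var b bytes.Buffer
	if err := t.Execute(&b, p); err != nil {
		return "", err
	}
	return b.String(), nil
}
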
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
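
The three openssl/ln sequences above all install CA material the way OpenSSL's default verify directory expects: each PEM under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject-name hash plus a ".0" suffix (b5213941.0, 51391683.0, 3ec20f2e.0 here), the same layout c_rehash maintains. One iteration of that sequence in Go (installCA is illustrative and assumes root):

package cahash

import (
	"os"
	"os/exec"
	"strings"
)

// installCA links a CA certificate into /etc/ssl/certs under its OpenSSL
// subject hash, the name scheme chain verification uses to find it.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}
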
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
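
The burst of -checkend 86400 runs above is the certificate-expiry gate: openssl x509 -checkend exits nonzero if the certificate expires within the given number of seconds (here 86400, i.e. 24 hours), in which case minikube regenerates certs instead of reusing them. The same check done natively in Go (expiresSoon is illustrative):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresSoon reports whether the first certificate in the PEM file
// expires within d, matching `openssl x509 -checkend <seconds>`.
func expiresSoon(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
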
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
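
This is the slow path the old-k8s-version profile hits: no preloaded v1.20.0 images in the runtime, so a ~473 MB tarball is copied in and unpacked directly into /var. The tar flags matter: -I lz4 delegates decompression to lz4, and --xattrs --xattrs-include security.capability preserves file capabilities on the extracted binaries, which a plain extraction would drop. A sketch of the same invocation (extractPreload is illustrative; it assumes tar and lz4 on the guest):

package preload

import "os/exec"

// extractPreload unpacks the image tarball the way the log shows,
// keeping security.capability xattrs intact on extracted files.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.Run()
}
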
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
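
The two "apply failed, will retry" blocks capture an expected race during a restart: kubectl's client-side validation needs the apiserver's OpenAPI document, and localhost:8444 is still refusing connections, so each apply is retried after a short randomized delay (167ms, then 331ms here), with the follow-up attempts switching to kubectl apply --force as the later lines show. A minimal sketch of such a retry loop (retryApply and the jitter scheme are illustrative, not minikube's retry.go):

package applyretry

import (
	"math/rand"
	"time"
)

// retryApply calls apply until it succeeds or attempts run out, sleeping
// a jittered, growing delay between tries, in the spirit of the
// retry.go lines above.
func retryApply(apply func() error, attempts int) error {
	var err error
	delay := 100 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return err
}
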
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
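The stretch above is the image-cache fallback for the v1.20.0 profile: crictl reports no preloaded images, each required tag is probed with podman image inspect --format {{.Id}}, tags whose stored ID does not match the expected hash are marked "needs transfer" and removed via crictl rmi, and the final load from the on-disk cache fails because the cached tarballs are missing, so the images will be pulled later instead. A minimal sketch of the probe step (a hypothetical helper, not minikube's own API), assuming podman on the host:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID asks the runtime for the stored ID of a tag; an error or an
    // empty result means the image is absent and needs transfer.
    func imageID(tag string) (string, error) {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", tag).Output()
        if err != nil {
            return "", err // inspect exits non-zero when the tag is missing
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        for _, tag := range []string{
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/pause:3.2",
        } {
            if id, err := imageID(tag); err != nil || id == "" {
                fmt.Printf("%s needs transfer\n", tag)
            }
        }
    }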
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
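The rendered file stacks four YAML documents separated by ---: an InitConfiguration and a ClusterConfiguration on kubeadm's v1beta2 API (matching the old v1.20.0 target), plus a KubeletConfiguration and a KubeProxyConfiguration. A minimal sketch, assuming gopkg.in/yaml.v3, of walking such a multi-document stream:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            // Decode returns io.EOF once every document has been read.
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }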
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
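The grep on the line before checks whether /etc/hosts already maps control-plane.minikube.internal; finding nothing, the bash pipeline strips any stale entry, appends the fresh mapping, and copies the temp file back under sudo. The same idea as a hypothetical Go helper:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<host>" and appends a
    // fresh "<ip>\t<host>" mapping, mirroring the bash one-liner above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.85.2",
            "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }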
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
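The ls/openssl/ln triples above install each CA into OpenSSL's trust directory: openssl x509 -hash -noout prints the subject hash (b5213941 for minikubeCA here), and /etc/ssl/certs/<hash>.0 must be a symlink to the PEM for OpenSSL to resolve it. A sketch of the same convention, shelling out for the hash:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert creates the /etc/ssl/certs/<subject-hash>.0 symlink that
    // OpenSSL uses to locate a trusted certificate.
    func linkCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        os.Remove(link) // replace any stale link, as ln -fs does
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }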
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
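Each -checkend 86400 call exits non-zero if the certificate expires within 86400 seconds, so the restart path only reuses apiserver, etcd, and front-proxy certs that remain valid for at least another 24 hours. The equivalent check with Go's standard library, assuming PEM-encoded files:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires inside the given window, like `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }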
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
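The repair adds the missing cluster and context stanzas under a file lock before the kubeconfig is rewritten. A sketch of that update, assuming k8s.io/client-go/tools/clientcmd; the names and paths come from the log, while the server URL and user entry are assumptions:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/20506-16361/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }

        name := "old-k8s-version-964633"
        cluster := api.NewCluster()
        cluster.Server = "https://192.168.85.2:8443" // assumed endpoint for this profile
        cfg.Clusters[name] = cluster

        ctx := api.NewContext()
        ctx.Cluster = name
        ctx.AuthInfo = name // assumes the profile name doubles as the user entry
        cfg.Contexts[name] = ctx

        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }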
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
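Each sshutil line dials a fresh session to the node's forwarded SSH port (127.0.0.1:33118) with the machine's id_rsa key and the docker user; the four clients then carry the scp and apply traffic in parallel. A minimal dial, assuming golang.org/x/crypto/ssh:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyPath := "/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa"
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33118", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s", out)
    }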
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
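	Every apply in this stretch fails identically: the kubelet has just been restarted and the apiserver behind localhost:8443 is not yet accepting connections, so retry.go reschedules each manifest after a short randomized delay until the server answers. The shape of that loop as a sketch (a plain doubling backoff standing in for the jittered delays seen in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry reruns `kubectl apply` until it succeeds or the
    // attempts are exhausted, doubling the delay between tries.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 250 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            // connection refused just means the apiserver is still coming up
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 6); err != nil {
            panic(err)
        }
    }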
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
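	Once the API server accepts connections, the queued applies complete and minikube verifies the metrics-server addon before reporting the enabled set. A minimal sketch of that kind of verification, assuming only a kubectl binary on PATH (this is not minikube's addons.go verification code; the namespace, deployment name, and timeout are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForDeployment polls kubectl until the named deployment reports at
	// least one ready replica -- the same idea as the "Verifying addon
	// metrics-server" step logged above.
	func waitForDeployment(ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "deployment", name,
				"-o", "jsonpath={.status.readyReplicas}").Output()
			ready := strings.TrimSpace(string(out))
			if err == nil && ready != "" && ready != "0" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("deployment %s/%s not ready within %v", ns, name, timeout)
	}

	func main() {
		if err := waitForDeployment("kube-system", "metrics-server", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}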
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 20:39:37 embed-certs-974821 crio[550]: time="2025-04-01 20:39:37.511223134Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e8e436f3-17d8-4ef1-930e-e6266bef73a5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 embed-certs-974821 crio[550]: time="2025-04-01 20:39:50.273464036Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=94fcbb1c-8284-463a-abae-2cb62e7998e2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 embed-certs-974821 crio[550]: time="2025-04-01 20:39:50.273802143Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=94fcbb1c-8284-463a-abae-2cb62e7998e2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 embed-certs-974821 crio[550]: time="2025-04-01 20:39:50.274252304Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=89074b7e-ab3d-4ace-bc3e-dd3c8f3daf73 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:39:50 embed-certs-974821 crio[550]: time="2025-04-01 20:39:50.275435636Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:40:34 embed-certs-974821 crio[550]: time="2025-04-01 20:40:34.274278185Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2512088c-cd47-47c9-89d7-be98edc063d8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:34 embed-certs-974821 crio[550]: time="2025-04-01 20:40:34.274629530Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2512088c-cd47-47c9-89d7-be98edc063d8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:47 embed-certs-974821 crio[550]: time="2025-04-01 20:40:47.274025772Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=bbb5d70c-c386-4226-83b1-47d3f374da3d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:47 embed-certs-974821 crio[550]: time="2025-04-01 20:40:47.274311023Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=bbb5d70c-c386-4226-83b1-47d3f374da3d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:47 embed-certs-974821 crio[550]: time="2025-04-01 20:40:47.274907957Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a5dec96f-651f-427e-bb03-a077a37a5149 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:40:47 embed-certs-974821 crio[550]: time="2025-04-01 20:40:47.276112594Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:41:30 embed-certs-974821 crio[550]: time="2025-04-01 20:41:30.274512613Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=635e58d3-86ec-4219-8689-322060b66c9b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:30 embed-certs-974821 crio[550]: time="2025-04-01 20:41:30.274789256Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=635e58d3-86ec-4219-8689-322060b66c9b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:43 embed-certs-974821 crio[550]: time="2025-04-01 20:41:43.273776503Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0fdf6746-3168-4a71-89cc-3e05b0253af0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:43 embed-certs-974821 crio[550]: time="2025-04-01 20:41:43.274052757Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0fdf6746-3168-4a71-89cc-3e05b0253af0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:56 embed-certs-974821 crio[550]: time="2025-04-01 20:41:56.274981299Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c50be4af-dbfb-4d29-917b-b765fc47e0fe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:56 embed-certs-974821 crio[550]: time="2025-04-01 20:41:56.275281744Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c50be4af-dbfb-4d29-917b-b765fc47e0fe name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:09 embed-certs-974821 crio[550]: time="2025-04-01 20:42:09.273863361Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=049f4be1-6070-43e7-a477-ac2833576deb name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:09 embed-certs-974821 crio[550]: time="2025-04-01 20:42:09.274142517Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=049f4be1-6070-43e7-a477-ac2833576deb name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:09 embed-certs-974821 crio[550]: time="2025-04-01 20:42:09.274710512Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=f0e89768-776e-46b1-888c-0d411517d0a2 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:42:09 embed-certs-974821 crio[550]: time="2025-04-01 20:42:09.275737280Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:42:52 embed-certs-974821 crio[550]: time="2025-04-01 20:42:52.273881014Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=dc77f0b5-b288-44a3-b9d5-dce07b77a725 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:52 embed-certs-974821 crio[550]: time="2025-04-01 20:42:52.274210974Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=dc77f0b5-b288-44a3-b9d5-dce07b77a725 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:43:06 embed-certs-974821 crio[550]: time="2025-04-01 20:43:06.274003675Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8f0a9eb6-4859-4267-a759-f4dfdd98c0c9 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:43:06 embed-certs-974821 crio[550]: time="2025-04-01 20:43:06.274266083Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8f0a9eb6-4859-4267-a759-f4dfdd98c0c9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c4be69226b22       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   4 minutes ago       Running             kube-proxy                1                   054a48bf8a57c       kube-proxy-gn6mh
	6709f6284d476       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   1                   68166a16e4ccf       kube-controller-manager-embed-certs-974821
	1b409b776938c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            1                   5a3a166087255       kube-apiserver-embed-certs-974821
	a9f1f681f3bf4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            1                   4fb08364de8f4       kube-scheduler-embed-certs-974821
	732a4bf5b37a1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      1                   d8b5cef371e62       etcd-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:42:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:39:04 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:39:04 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:39:04 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:39:04 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 28ebfe595ec94fb9a75839c7c4da9d65
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 4m1s                 kube-proxy       
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                  node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	  Normal   Starting                 4m7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m7s (x8 over 4m7s)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m59s                node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [732a4bf5b37a17d64428372c4b341ca0176e303c278397947fc37e81f445b747] <==
	{"level":"info","ts":"2025-04-01T20:39:03.342579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:03.342621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:03.344600Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:embed-certs-974821 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:39:03.345939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:03.345955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:03.347047Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:03.347143Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:03.348433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:03.347178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348580Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-04-01T20:39:04.920589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.306335ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571761152512035446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:665 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.94.2\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-04-01T20:39:04.921414Z","caller":"traceutil/trace.go:171","msg":"trace[478374922] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"174.148343ms","start":"2025-04-01T20:39:04.747247Z","end":"2025-04-01T20:39:04.921396Z","steps":["trace[478374922] 'process raft request'  (duration: 174.071396ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921615Z","caller":"traceutil/trace.go:171","msg":"trace[1294899020] linearizableReadLoop","detail":"{readStateIndex:873; appliedIndex:872; }","duration":"174.902577ms","start":"2025-04-01T20:39:04.746663Z","end":"2025-04-01T20:39:04.921566Z","steps":["trace[1294899020] 'read index received'  (duration: 981.565µs)","trace[1294899020] 'applied index is now lower than readState.Index'  (duration: 173.918021ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:04.921658Z","caller":"traceutil/trace.go:171","msg":"trace[1643816995] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"174.752569ms","start":"2025-04-01T20:39:04.746898Z","end":"2025-04-01T20:39:04.921650Z","steps":["trace[1643816995] 'process raft request'  (duration: 174.347461ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921801Z","caller":"traceutil/trace.go:171","msg":"trace[214304335] transaction","detail":"{read_only:false; number_of_response:1; response_revision:699; }","duration":"175.517874ms","start":"2025-04-01T20:39:04.746273Z","end":"2025-04-01T20:39:04.921791Z","steps":["trace[214304335] 'compare'  (duration: 172.157301ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.921867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.179491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-974821\" limit:1 ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2025-04-01T20:39:04.922390Z","caller":"traceutil/trace.go:171","msg":"trace[1175626099] range","detail":"{range_begin:/registry/minions/embed-certs-974821; range_end:; response_count:1; response_revision:701; }","duration":"175.735808ms","start":"2025-04-01T20:39:04.746639Z","end":"2025-04-01T20:39:04.922375Z","steps":["trace[1175626099] 'agreement among raft nodes before linearized reading'  (duration: 175.172297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.922892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.707137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:92298"}
	{"level":"info","ts":"2025-04-01T20:39:04.922963Z","caller":"traceutil/trace.go:171","msg":"trace[382725270] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:701; }","duration":"104.813727ms","start":"2025-04-01T20:39:04.818140Z","end":"2025-04-01T20:39:04.922954Z","steps":["trace[382725270] 'agreement among raft nodes before linearized reading'  (duration: 104.571539ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.923317Z","caller":"traceutil/trace.go:171","msg":"trace[1182439] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:701; }","duration":"104.889107ms","start":"2025-04-01T20:39:04.818419Z","end":"2025-04-01T20:39:04.923308Z","steps":["trace[1182439] 'agreement among raft nodes before linearized reading'  (duration: 104.87954ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.923503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.18834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-04-01T20:39:04.923557Z","caller":"traceutil/trace.go:171","msg":"trace[53470254] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:701; }","duration":"105.257596ms","start":"2025-04-01T20:39:04.818292Z","end":"2025-04-01T20:39:04.923549Z","steps":["trace[53470254] 'agreement among raft nodes before linearized reading'  (duration: 105.178511ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:37.619038Z","caller":"traceutil/trace.go:171","msg":"trace[512211353] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"105.547476ms","start":"2025-04-01T20:39:37.513466Z","end":"2025-04-01T20:39:37.619014Z","steps":["trace[512211353] 'process raft request'  (duration: 43.691695ms)","trace[512211353] 'compare'  (duration: 61.757597ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:37.620916Z","caller":"traceutil/trace.go:171","msg":"trace[1272640698] transaction","detail":"{read_only:false; response_revision:824; number_of_response:1; }","duration":"101.494988ms","start":"2025-04-01T20:39:37.519401Z","end":"2025-04-01T20:39:37.620896Z","steps":["trace[1272640698] 'process raft request'  (duration: 101.291053ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:43:07 up  1:25,  0 users,  load average: 1.27, 1.11, 1.56
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [1b409b776938c7f6d6325283fe8d5f7d2038212e8bab65b45b30c12beae6f139] <==
	I0401 20:39:06.726703       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:39:06.743280       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:39:06.864503       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.70.77"}
	I0401 20:39:06.879215       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.249.166"}
	I0401 20:39:09.002162       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:39:09.152793       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0401 20:39:09.303104       1 controller.go:615] quota admission added evaluator for: endpoints
	W0401 20:40:05.732789       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:40:05.732813       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:40:05.732850       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:40:05.732885       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:40:05.733965       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:40:05.733981       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:42:05.734136       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:42:05.734141       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:42:05.734225       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:42:05.734250       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:42:05.735364       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:42:05.735427       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6709f6284d476f9efda2e9d43e571a75efeb97855b385ce4b1586eaa4de4f1a9] <==
	I0401 20:39:08.803528       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:39:08.809715       1 shared_informer.go:320] Caches are synced for resource quota
	I0401 20:39:08.811902       1 shared_informer.go:320] Caches are synced for daemon sets
	I0401 20:39:08.819204       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:39:09.413906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="258.847461ms"
	I0401 20:39:09.414015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="257.645027ms"
	I0401 20:39:09.419606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="5.536827ms"
	I0401 20:39:09.419618       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="5.659121ms"
	I0401 20:39:09.419809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="45.68µs"
	I0401 20:39:09.419809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="76.59µs"
	I0401 20:39:09.424036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="52.475µs"
	E0401 20:39:38.808817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:39:38.826902       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:08.814388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:08.833590       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:38.820060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:38.840474       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:08.825708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:08.847728       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:38.831476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:38.855415       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:08.836869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:08.862448       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:38.842049       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:38.869647       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0c4be69226b22952a80da0c17c51cbc7f4486bc715cbe15cc3dd88daecfaf452] <==
	I0401 20:39:06.072071       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:06.448227       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:39:06.461903       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:06.641034       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:06.641193       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:06.661209       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:06.661731       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:06.661779       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.671952       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:06.673686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:06.672521       1 config.go:329] "Starting node config controller"
	I0401 20:39:06.673736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:06.672555       1 config.go:199] "Starting service config controller"
	I0401 20:39:06.673765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:06.774792       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:06.774838       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:06.775459       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a9f1f681f3bf4be0d5f99a181b4ddfe1efade3b57adf4f7e82926d6306363cec] <==
	I0401 20:39:02.378239       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:04.549023       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:04.549065       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:04.549076       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:04.549086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:04.727215       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:04.727317       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:04.729809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:04.729861       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:04.730096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:04.730177       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:04.842475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:42:20 embed-certs-974821 kubelet[676]: E0401 20:42:20.179592     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540140179345889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:20 embed-certs-974821 kubelet[676]: E0401 20:42:20.197598     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:25 embed-certs-974821 kubelet[676]: E0401 20:42:25.198464     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:30 embed-certs-974821 kubelet[676]: E0401 20:42:30.180733     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540150180501207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:30 embed-certs-974821 kubelet[676]: E0401 20:42:30.180777     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540150180501207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:30 embed-certs-974821 kubelet[676]: E0401 20:42:30.199594     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:35 embed-certs-974821 kubelet[676]: E0401 20:42:35.200750     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:40 embed-certs-974821 kubelet[676]: E0401 20:42:40.181845     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540160181640652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:40 embed-certs-974821 kubelet[676]: E0401 20:42:40.181889     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540160181640652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:40 embed-certs-974821 kubelet[676]: E0401 20:42:40.202516     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:41 embed-certs-974821 kubelet[676]: E0401 20:42:41.037666     676 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:41 embed-certs-974821 kubelet[676]: E0401 20:42:41.037769     676 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:41 embed-certs-974821 kubelet[676]: E0401 20:42:41.037957     676 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqrvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-bq54h_kube-system(f880d90a-5596-4ce4-b2e9-ab4094de1621): ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:42:41 embed-certs-974821 kubelet[676]: E0401 20:42:41.039183     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:42:45 embed-certs-974821 kubelet[676]: E0401 20:42:45.203441     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:50 embed-certs-974821 kubelet[676]: E0401 20:42:50.182975     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540170182742976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:50 embed-certs-974821 kubelet[676]: E0401 20:42:50.183010     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540170182742976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:50 embed-certs-974821 kubelet[676]: E0401 20:42:50.204696     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:52 embed-certs-974821 kubelet[676]: E0401 20:42:52.274489     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:42:55 embed-certs-974821 kubelet[676]: E0401 20:42:55.206425     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:43:00 embed-certs-974821 kubelet[676]: E0401 20:43:00.184285     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540180184052963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:43:00 embed-certs-974821 kubelet[676]: E0401 20:43:00.184334     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540180184052963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:43:00 embed-certs-974821 kubelet[676]: E0401 20:43:00.207236     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:43:05 embed-certs-974821 kubelet[676]: E0401 20:43:05.208511     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:43:06 embed-certs-974821 kubelet[676]: E0401 20:43:06.274510     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	

-- /stdout --
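
The journal tail above pins down the failure: every pull of docker.io/kindest/kindnetd:v20250214-acbabc1a is refused with toomanyrequests, so kindnet never starts, no CNI config ever lands in /etc/cni/net.d/, and the node stays NetworkReady=false. On a rate-limited runner, one way to take Docker Hub out of the loop is to pre-seed the image into the profile; a minimal sketch, assuming the host's own Docker daemon can still pull (or already caches) the image:

	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

Because the kindnet container is declared with ImagePullPolicy:IfNotPresent, a side-loaded copy should be used without another registry round trip.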
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1 (94.70518ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qwn44:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age               From               Message
	  ----     ------            ----              ----               -------
	  Warning  FailedScheduling  4m3s              default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  7m (x2 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-nnhr5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-x6nnb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-q2fjx" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (255.84s)
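
The describe output above explains the Pending busybox pod: with kindnet unable to start, the lone node keeps its node.kubernetes.io/not-ready taint, and the pod's only tolerations are the default 300s NoExecute ones, so the scheduler reports the taint as untolerated and busybox never lands. A manual check that the taint is still present (not something the harness runs) could be:

	kubectl --context embed-certs-974821 get nodes -o jsonpath='{.items[*].spec.taints}'
	kubectl --context embed-certs-974821 describe node | grep -A2 Taints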

TestStartStop/group/old-k8s-version/serial/SecondStart (256.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 80 (4m14.030266847s)

-- stdout --
	* [old-k8s-version-964633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-964633" primary control-plane node in "old-k8s-version-964633" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Restarting existing docker container for "old-k8s-version-964633" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0401 20:38:53.341629  351961 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:53.346010  351961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:53.346029  351961 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:53.346036  351961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:53.346315  351961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:53.348398  351961 out.go:352] Setting JSON to false
	I0401 20:38:53.349566  351961 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4879,"bootTime":1743535054,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:53.349726  351961 start.go:139] virtualization: kvm guest
	I0401 20:38:53.352139  351961 out.go:177] * [old-k8s-version-964633] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:53.354053  351961 notify.go:220] Checking for updates...
	I0401 20:38:53.354117  351961 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:53.357211  351961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:53.358823  351961 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:53.361828  351961 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:53.363106  351961 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:53.364384  351961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:53.366081  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:53.368361  351961 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
	I0401 20:38:53.369569  351961 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:53.397187  351961 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:53.397318  351961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:53.446742  351961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-04-01 20:38:53.438185068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:53.446851  351961 docker.go:318] overlay module found
	I0401 20:38:53.448738  351961 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:53.449905  351961 start.go:297] selected driver: docker
	I0401 20:38:53.449918  351961 start.go:901] validating driver "docker" against &{Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:53.450022  351961 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:53.450885  351961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:53.505975  351961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:64 SystemTime:2025-04-01 20:38:53.496412103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:53.506384  351961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:53.506418  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:38:53.506478  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:53.506524  351961 start.go:340] cluster config:
	{Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:53.509275  351961 out.go:177] * Starting "old-k8s-version-964633" primary control-plane node in "old-k8s-version-964633" cluster
	I0401 20:38:53.510425  351961 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:53.511659  351961 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:53.512761  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:53.512796  351961 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:53.512802  351961 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:53.512863  351961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:53.512894  351961 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:53.512909  351961 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 20:38:53.513040  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.536318  351961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:53.536338  351961 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:53.536352  351961 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:53.536378  351961 start.go:360] acquireMachinesLock for old-k8s-version-964633: {Name:mkcf81b33459cdbb9c109c2df72357b4097207d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:53.536446  351961 start.go:364] duration metric: took 37.128µs to acquireMachinesLock for "old-k8s-version-964633"
	I0401 20:38:53.536466  351961 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:53.536477  351961 fix.go:54] fixHost starting: 
	I0401 20:38:53.536722  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.556357  351961 fix.go:112] recreateIfNeeded on old-k8s-version-964633: state=Stopped err=<nil>
	W0401 20:38:53.556391  351961 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:53.558265  351961 out.go:177] * Restarting existing docker container for "old-k8s-version-964633" ...
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
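
Taken together, the sed edits above should leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with values along these lines (a reconstruction from the commands, not a capture from the node; the TOML table headers are assumed from CRI-O's stock layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"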
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
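	[editor's note] The failure above is a plain stat on the cached image tarball: minikube inspects each image in the runtime via `sudo podman image inspect`, marks mismatches as "needs transfer", removes them with crictl, and then tries to load the replacement from the on-disk cache, which is missing here. A minimal sketch of that existence check, assuming the path layout shown in the log (the helper name is hypothetical, not minikube's actual code):

	    // Sketch: reproduce the stat check behind "Unable to load cached images".
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"path/filepath"
	    )

	    // cachedImagePath mirrors the layout seen in the log:
	    // <minikube home>/cache/images/<arch>/<registry>/<name>_<tag>
	    func cachedImagePath(home, arch, image, tag string) string {
	    	return filepath.Join(home, "cache", "images", arch, image+"_"+tag)
	    }

	    func main() {
	    	p := cachedImagePath("/home/jenkins/minikube-integration/20506-16361/.minikube",
	    		"amd64", "registry.k8s.io/kube-apiserver", "v1.20.0")
	    	if _, err := os.Stat(p); err != nil {
	    		// With no cached tarball on disk this yields the same error class
	    		// as the log line above: "stat ...: no such file or directory".
	    		fmt.Printf("Unable to load cached images: LoadCachedImages: %v\n", err)
	    	}
	    }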
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
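	[editor's note] The bash one-liner above strips any stale `control-plane.minikube.internal` line from /etc/hosts and re-appends it with the current node IP. A minimal Go equivalent of that filter-then-append approach, under the assumption that writing to a scratch path is acceptable for illustration (the real flow targets /etc/hosts via `sudo cp`):

	    // Sketch: ensure exactly one "IP<TAB>host" entry in a hosts-style file.
	    package main

	    import (
	    	"os"
	    	"strings"
	    )

	    func ensureHostsEntry(path, ip, host string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil && !os.IsNotExist(err) {
	    		return err
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(string(data), "\n") {
	    		if strings.HasSuffix(line, "\t"+host) { // drop stale entries
	    			continue
	    		}
	    		if line != "" {
	    			kept = append(kept, line)
	    		}
	    	}
	    	kept = append(kept, ip+"\t"+host)
	    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	    }

	    func main() {
	    	_ = ensureHostsEntry("/tmp/hosts.example", "192.168.85.2", "control-plane.minikube.internal")
	    }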
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
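	[editor's note] The `openssl x509 -hash -noout -in <pem>` runs above compute the OpenSSL subject-name hash that the certificate directory lookup expects; the following `ln -fs` then installs the PEM as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA.pem, as seen in the log). A sketch of the same two steps, assuming `openssl` is on PATH (paths illustrative):

	    // Sketch: compute a cert's subject hash and install the <hash>.0 symlink.
	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"os/exec"
	    	"path/filepath"
	    	"strings"
	    )

	    func linkCert(pem, certsDir string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	    	link := filepath.Join(certsDir, hash+".0")
	    	os.Remove(link) // replace any stale link, like `ln -fs`
	    	return os.Symlink(pem, link)
	    }

	    func main() {
	    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }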
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
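	[editor's note] The "kubeconfig needs updating (will repair)" step above adds the missing cluster and context entries under a file lock before rewriting the kubeconfig. A hedged sketch of that repair using client-go's clientcmd package (a real API; the cluster name and endpoint are taken from the log, but this is not minikube's own implementation):

	    // Sketch: add a missing cluster/context pair to a kubeconfig file.
	    package main

	    import (
	    	"k8s.io/client-go/tools/clientcmd"
	    	api "k8s.io/client-go/tools/clientcmd/api"
	    )

	    func repairKubeconfig(path, name, server string) error {
	    	cfg, err := clientcmd.LoadFromFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	if _, ok := cfg.Clusters[name]; !ok {
	    		cfg.Clusters[name] = &api.Cluster{Server: server}
	    	}
	    	if _, ok := cfg.Contexts[name]; !ok {
	    		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	    	}
	    	return clientcmd.WriteToFile(*cfg, path)
	    }

	    func main() {
	    	_ = repairKubeconfig("/home/jenkins/minikube-integration/20506-16361/kubeconfig",
	    		"old-k8s-version-964633", "https://192.168.85.2:8443")
	    }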
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
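	[editor's note] "waiting up to 6m0s for node ... to be Ready" polls the apiserver for the node's Ready condition; the connection-refused errors that follow (e.g. at 20:39:09) are expected while the control plane is still restarting. A minimal client-go sketch of that readiness loop, assuming the kubeconfig path from the log:

	    // Sketch: poll a node's Ready condition until a deadline, tolerating
	    // transient "connection refused" while the apiserver comes back.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	    		if err == nil {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					return nil
	    				}
	    			}
	    		} // errors (connection refused) are simply retried, as in the log
	    		time.Sleep(2 * time.Second)
	    	}
	    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20506-16361/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "old-k8s-version-964633", 6*time.Minute))
	    }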
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
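	[editor's note] Each failed `kubectl apply` above is rescheduled by minikube's retry helper with a randomized, growing delay ("will retry after 370ms / 533ms / 720ms ..."). The following is a generic sketch of that jittered-backoff pattern, not minikube's actual retry.go; the delays and attempt count are illustrative:

	    // Sketch: retry a command with jittered, roughly doubling backoff.
	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"os/exec"
	    	"time"
	    )

	    func retryApply(manifest string, attempts int) error {
	    	delay := 300 * time.Millisecond
	    	var err error
	    	for i := 0; i < attempts; i++ {
	    		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
	    			return nil
	    		}
	    		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
	    		fmt.Printf("apply failed, will retry after %s: %v\n", jittered, err)
	    		time.Sleep(jittered)
	    		delay *= 2 // grow the base delay, as the log's retry intervals do
	    	}
	    	return err
	    }

	    func main() {
	    	_ = retryApply("/etc/kubernetes/addons/storageclass.yaml", 5)
	    }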
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-964633 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 80
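The "retry.go:31] will retry after ..." lines in the stderr log above show minikube re-running each kubectl apply with a growing, jittered delay until the API server on localhost:8443 answers, while node_ready.go polls the node every few seconds until the 6m0s wait deadline expires. The following is a minimal, self-contained Go sketch of that retry-with-backoff pattern; retryWithBackoff, its parameters, and the doubling schedule are illustrative assumptions for this report, not minikube's actual retry.go API.

	// Sketch only: a jittered exponential-backoff retry loop resembling the
	// behavior visible in the log above. Not minikube source code.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or the deadline
	// passes, sleeping a jittered, growing interval between attempts.
	func retryWithBackoff(fn func() error, deadline time.Duration) error {
		start := time.Now()
		delay := 500 * time.Millisecond // assumed starting delay
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			// Jitter the delay so concurrent appliers do not retry in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection to the server localhost:8443 was refused")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}

Under this scheme each apply eventually succeeds once the apiserver comes back (as it did above at 20:39:18), but the separate node-readiness wait can still exhaust its own deadline, which is the GUEST_START failure reported here.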
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.587755812Z",
	            "FinishedAt": "2025-04-01T20:38:52.359374523Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98507353cdf3ad29538d69a6c2ab371dc9afedd5474261071e73baebb06da200",
	            "SandboxKey": "/var/run/docker/netns/98507353cdf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:45:5d:ae:77:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "97180c448aba15ca3cf07e1fc19eac60b297d564aac63d5f4b5b7521b5a4989c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
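For post-mortems like this, only a few fields of the docker inspect JSON usually matter: the container state and the host port mapped to the apiserver's 8443/tcp. Below is a minimal Go sketch of pulling those fields out of inspect output; the container struct and the abbreviated sample JSON are assumptions for illustration, not part of the test harness.

	// Sketch only: decode the interesting fields of `docker inspect` output.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// container models just the fields referenced in the post-mortem above.
	type container struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// Abbreviated sample mirroring the inspect output above (assumed).
		raw := `[{"State":{"Status":"running","Running":true},
		          "NetworkSettings":{"Ports":{"8443/tcp":[{"HostIp":"127.0.0.1","HostPort":"33121"}]}}}]`
		var out []container
		if err := json.Unmarshal([]byte(raw), &out); err != nil {
			panic(err)
		}
		c := out[0]
		p := c.NetworkSettings.Ports["8443/tcp"][0]
		fmt.Printf("status=%s running=%v apiserver=%s:%s\n",
			c.State.Status, c.State.Running, p.HostIp, p.HostPort)
	}

Here the container is running and 8443/tcp is published on 127.0.0.1:33121, so the failure above is the apiserver inside the container refusing connections, not a missing port mapping.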
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.187253469s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
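	(Note: each ssh client above is constructed by first asking Docker which host port is published for the container's port 22. The inspect template from the Run line, executed by hand, prints exactly that port; 33108 matches the Port field of the ssh client in this log.)

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-671514
	33108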
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
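	(Note: the two df probes above sample /var usage, apparently to drive minikube's low-disk warning: the first yields the Use% column, the second the available space in whole gigabytes. Run by hand they look like the following; the output values are illustrative, not taken from this log.)

	$ df -h /var | awk 'NR==2{print $5}'   # column 5 of the data row is Use%
	23%
	$ df -BG /var | awk 'NR==2{print $4}'  # with -BG, column 4 is Avail in GB
	164G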
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
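	(Note: the two find/mv Runs above disable conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them; here only the loopback config was present. A sketch of the reverse rename for undoing this by hand; minikube itself does not run this command here.)

	# strip the .mk_disabled suffix to re-enable a config minikube parked
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;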
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
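	(Note: the tee pipeline above leaves a one-line crictl configuration on the node so that plain crictl invocations talk to CRI-O. Reconstructed from the printf string in the log, the resulting file is:)

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock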
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
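	(Note: taken together, the sed edits above pin the pause image, set the cgroup manager to cgroupfs, put conmon in the pod cgroup, and allow unprivileged low ports inside pods; the daemon-reload and crio restart then apply them. The touched keys of the drop-in end up roughly as below; this is a reconstruction from the sed expressions only, and the surrounding TOML sections and other keys are not shown in the log.)

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]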
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
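	(Note: the bare ExecStart= line in the kubelet unit above is deliberate; systemd requires clearing the base unit's command before an override can set a new one. The generic pattern, with hypothetical names:)

	# /etc/systemd/system/example.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/local/bin/example --flag=value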
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
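	(Note: this rendered kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later diffed against the copy already on the node to decide whether the control plane needs reconfiguring; see the diff -u Run near the end of this section. The same check by hand:)

	# an empty diff means the running cluster does not require reconfiguration
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new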
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
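	(Note: the hosts-file idiom in this Run, also used earlier for host.minikube.internal, rewrites one tab-separated entry while leaving the rest of /etc/hosts alone: grep -v drops any stale line for the name, echo appends the fresh mapping, and the temp file is copied back into place. Spelled out:)

	# replace the control-plane.minikube.internal entry without disturbing other hosts
	{
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.76.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts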
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
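	(Note: the ls/openssl/ln sequence above, repeated for each PEM, implements OpenSSL's subject-hash lookup convention: each CA certificate gets a symlink in /etc/ssl/certs named <hash>.0, which is where the 51391683.0, 3ec20f2e.0, and b5213941.0 names come from. For a single file the steps reduce to:)

	# compute the subject hash and link the cert under it
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"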
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
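
The inspect template that recurs in the cli_runner lines above, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, digs the host port bound to the container's 22/tcp out of Docker's port map; that port is what the sshutil clients dial on 127.0.0.1. A minimal Go sketch of the same lookup, assuming only that the docker CLI is on PATH (the container name is taken from the log and is otherwise illustrative):

// Sketch: recover the host port mapped to a container's 22/tcp, the same
// lookup the inspect template above performs via docker's Go templating.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string // matches docker's "HostIp" (unmarshal is case-insensitive)
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "no-preload-671514").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 || len(containers[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		panic("no 22/tcp binding found")
	}
	fmt.Println(containers[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // e.g. 33108, as dialed above
}
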
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
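
The SSH command above makes the /etc/hosts edit idempotent: skip if the hostname already resolves, rewrite an existing 127.0.1.1 line if present, append one otherwise. The same logic sketched in Go (path and hostname come from the log; error handling is kept minimal):

// Sketch of the idempotent /etc/hosts edit performed over SSH above.
package main

import (
	"os"
	"regexp"
	"strings"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), hostname) {
		return nil // already resolvable, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("\n127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-974821"); err != nil {
		panic(err)
	}
}
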
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
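
provision.go mints a server certificate signed by the minikube CA, with the san=[...] list above becoming the certificate's subject alternative names. A sketch with crypto/x509, assuming an RSA PKCS#1 CA key on disk; the file names are illustrative, not minikube's layout:

// Sketch: sign a server cert with an existing CA; the SAN entries from the
// log line above split into DNSNames and IPAddresses here.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustRead(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	return b
}

func main() {
	caBlock, _ := pem.Decode(mustRead("ca.pem"))      // CA cert (illustrative path)
	keyBlock, _ := pem.Decode(mustRead("ca-key.pem")) // CA key, assumed RSA PKCS#1
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-974821"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-974821", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
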
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
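
Each sshutil.go line above corresponds to an SSH client against 127.0.0.1 on the Docker-mapped port, authenticating with the machine's id_rsa. A sketch of that dial with golang.org/x/crypto/ssh (port and key path mirror the log); skipping host-key verification is tolerable here only because the endpoint is loopback:

// Sketch of the loopback SSH dial recorded by the sshutil.go lines above.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33113", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
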
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
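
The df probe above reads the Use% of /var; the same figure can be taken straight from statfs(2). A sketch using golang.org/x/sys/unix (mount point from the log; df's exact rounding and reserved-block handling may differ slightly):

// Sketch: approximate `df -h /var | awk 'NR==2{print $5}'` via statfs(2).
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	// df computes Use% against non-reserved space, so this can be off by a point.
	fmt.Printf("/var: %.0f%% used\n", 100*float64(total-avail)/float64(total))
}
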
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
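
The sequence above rewrites the CRI-O drop-in with sed (pin the pause image, force cgroupfs, re-add conmon_cgroup) and then restarts the runtime. The same edits sketched in Go with multiline regexps (path and values from the log; this illustrates the transformation, not minikube's code):

// Sketch of the sed-style edits to the CRI-O drop-in performed above.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as in sed 's|^.*pause_image = .*$|...|'.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Drop any existing conmon_cgroup line, then re-add it after
	// cgroup_manager, mirroring the sed '/d' + '/a' pair above.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
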
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
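
The node_ready.go lines above and below poll the node until its Ready condition flips. A sketch of one such check with client-go (kubeconfig path and node name from the log; the retry loop and 6m0s deadline are omitted):

// Sketch: read a node's Ready condition, as the node_ready.go poll does.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-671514", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q Ready=%s\n", node.Name, c.Status) // matches "Ready":"False" in the log
		}
	}
}
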
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
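
detect.go reported "cgroupfs" here. A rough way to reproduce that check by hand (an assumed probe, not necessarily minikube's exact logic): cgroup v2 hosts mount /sys/fs/cgroup as cgroup2fs, while v1 hosts like this one typically end up on the cgroupfs driver.

	# heuristic cgroup-version check (GNU stat)
	if [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
	  echo "cgroup v2 host (systemd driver is the usual choice)"
	else
	  echo "cgroup v1 host (cgroupfs, matching the detection above)"
	fi
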
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
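
The sequence above follows systemd's stop/disable/mask ladder: stop the running units, remove them from boot targets, then mask the service so nothing can pull it back in, and finally verify. Replayed by hand against the same units:

	sudo systemctl stop -f docker.socket docker.service  # stop what is running
	sudo systemctl disable docker.socket                 # drop from boot targets
	sudo systemctl mask docker.service                   # block re-activation
	systemctl is-active --quiet docker || echo "docker is down"
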
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
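
After the sed edits above, the cri-o drop-in should carry the pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm (expected values are inferred from the sed commands in the log, not dumped from the actual file):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
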
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
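
The two 60s waits above first poll for the CRI socket, then ask crictl for the runtime version. A minimal equivalent loop (socket path and timeout taken from the log; the 1s polling interval is an assumption):

	for _ in $(seq 1 60); do
	  [ -S /var/run/crio/crio.sock ] && break   # socket exists, stop polling
	  sleep 1
	done
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
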
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
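
The one-liner above is an idempotent hosts-file update: strip any existing entry for the name, append the fresh one, and copy the result back over /etc/hosts. The same pattern as a reusable function (pin_host is a hypothetical helper name, not minikube code):

	pin_host() {  # usage: pin_host 192.168.94.1 host.minikube.internal
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
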
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
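
The rendered config above lands at /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file without touching the cluster is kubeadm's dry-run mode (binary path and file name are taken from the surrounding lines; running this is an assumption on my part, not a step the test performs):

	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
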
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
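
The openssl -hash / ln -fs pairs above implement OpenSSL's subject-hash lookup convention: TLS clients locate a CA in /etc/ssl/certs through a <subject-hash>.0 symlink. Condensed into one step (file names from the log; b5213941 is the hash the log itself links for minikubeCA):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
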
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
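
Each -checkend 86400 probe above asks openssl whether the certificate will still be valid 24 hours from now; exit status 0 means yes, anything else would push minikube toward regenerating the cert. For example:

	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "valid for at least another 24h"
	else
	  echo "expires within 24h"
	fi
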
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
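
The ls probe above is the restart detector: if the kubelet flags file, the kubelet config, and the etcd data dir all exist, minikube attempts a cluster restart instead of a fresh kubeadm init. As a sketch:

	if sudo ls /var/lib/kubelet/kubeadm-flags.env \
	           /var/lib/kubelet/config.yaml \
	           /var/lib/minikube/etcd >/dev/null 2>&1; then
	  echo "existing configuration files found -> attempt cluster restart"
	fi
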
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
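
The one-liner above rewrites /etc/hosts safely: it filters any stale host.minikube.internal line into a temp file, appends the fresh mapping, and copies the result back in one step so the file is never observed half-written. The same pattern in Go, as a sketch (the updateHosts helper is hypothetical):

package main

import (
	"log"
	"os"
	"strings"
)

// updateHosts drops any line ending in "\t<host>" and appends "ip\thost",
// writing via a temp file so readers never see a truncated /etc/hosts.
func updateHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry, replaced below
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic when tmp is on the same filesystem
}

func main() {
	if err := updateHosts("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
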
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
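
With no preload on the node, the ~473 MB tarball is copied over and unpacked with tar's external lz4 decompressor, and each step is timed for the "duration metric" lines. A sketch of running and timing such a command from Go; runTimed is an illustrative helper, not minikube's ssh_runner.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// runTimed executes a command and reports how long it took,
// mirroring the "duration metric: took ..." lines in the log.
func runTimed(name string, args ...string) (time.Duration, error) {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("%s: %v\n%s", name, err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := runTimed("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", d)
}
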
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
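
"waiting up to 6m0s for node ... to be 'Ready'" amounts to polling the node object until its Ready condition reports True. A sketch with client-go, assuming a kubeconfig path; waitNodeReady is an illustrative name, not minikube's own logic.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's Ready condition is True or the
// timeout expires. Sketch only; transient API errors keep the poll going.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver may still be coming up
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(cs, "default-k8s-diff-port-993330", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node is Ready")
}
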
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
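
Both applies fail the same way because the apiserver on port 8444 is still coming up, so the addon applier retries after short, growing delays (167ms, then 331ms above). A generic sketch of that retry shape; retryWithBackoff is an illustrative helper, not minikube's retry package.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// retryWithBackoff retries fn with a delay that grows each attempt,
// the pattern behind the "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		log.Printf("will retry after %s: %v", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(5, 150*time.Millisecond, func() error {
		out, err := exec.Command("kubectl", "apply", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v\n%s", err, out)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
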
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
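
Each "needs transfer" verdict above comes from asking the runtime for the image ID and treating a failed inspect as absence; the image is then removed via crictl and reloaded from the on-disk cache, which in this run was itself missing, hence the warning. A sketch of the presence check; imagePresent is an illustrative helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent asks podman for the image ID; a non-zero exit means the
// image is not in the container runtime and must be loaded from cache.
func imagePresent(image string) (bool, string) {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return false, ""
	}
	return true, strings.TrimSpace(string(out))
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	} {
		if ok, id := imagePresent(img); ok {
			fmt.Printf("%s present as %s\n", img, id)
		} else {
			fmt.Printf("%s needs transfer from the image cache\n", img)
		}
	}
}
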
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
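
The empty ExecStart= line in the unit above is deliberate: in a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service, so only the full command line written here takes effect.
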
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
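
The b5213941.0-style names above are OpenSSL subject-hash links: the hash is computed with openssl x509 -hash and the PEM is symlinked into /etc/ssl/certs under it so TLS clients can locate the CA by hash. A sketch of that step; linkCertByHash is an illustrative helper.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash symlinks certPath into dir under "<subject-hash>.0",
// the layout OpenSSL uses to look up trusted CAs.
func linkCertByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
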
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
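
openssl x509 -checkend 86400 exits non-zero when a certificate expires within the next 86400 seconds (24 hours), which is how each cert above is vetted before reuse. The equivalent check in pure Go with crypto/x509, as a sketch; the path in main is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, the equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
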
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
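
The two cli_runner inspect calls above recover the host port Docker published for the container's 22/tcp, which the SSH clients below then dial on 127.0.0.1. A standalone equivalent that shells out to the docker CLI with the same Go template (sshHostPort is a made-up helper name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port bound to the container's 22/tcp,
// using the exact template from the cli_runner call in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-964633")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + port)
}
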
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
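
With the SSH clients on port 33118 established, the earlier "scp memory --> /etc/kubernetes/addons/..." entries stream rendered manifests straight from memory onto the node rather than copying files from disk. A rough sketch of that in-memory copy over an established golang.org/x/crypto/ssh session; copyToNode is an invented name and the sudo tee pipeline is an assumption about the transport, not minikube's actual sshutil code:

package sshcopy

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyToNode writes a byte slice to a remote path through one SSH session,
// mirroring the "scp memory --> <path> (<n> bytes)" log entries.
func copyToNode(client *ssh.Client, data []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee runs under sudo so the write into /etc/kubernetes/addons succeeds.
	return session.Run("sudo tee " + remotePath + " >/dev/null")
}
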
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
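
From here node_ready.go polls the node object until its Ready condition turns true, treating API errors (the "dial tcp ... connection refused" entries further down) as transient and simply polling again. A minimal sketch with client-go; the 6m0s timeout comes from the log line above, while the 2-second interval is an assumption:

package nodeready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node reports Ready=True or the
// timeout expires; errors are swallowed so apiserver restarts just
// schedule another poll.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
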
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
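
Each kubectl apply above fails with "connection refused" because the restarted apiserver is not yet listening on 8443, and retry.go reschedules it with a growing, jittered delay (370ms and 235ms here, stretching to several seconds below) until the applies finally complete around 20:39:18. The shape of that loop, sketched with made-up constants rather than minikube's actual backoff policy:

package retryapply

import (
	"log"
	"time"
)

// retryApply re-runs apply with a doubling delay until it succeeds or the
// deadline passes. Illustrative only; the real retry.go adds jitter, which
// is why the waits in the log are not exact multiples of each other.
func retryApply(apply func() error, maxWait time.Duration) error {
	backoff := 250 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		err := apply()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		log.Printf("apply failed, will retry after %v: %v", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
}
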
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
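	(Editor's note: all four profiles above fail the same way — node_ready.go polls the node's Ready condition every few seconds until a deadline, then exits with GUEST_START. A minimal client-go sketch of that polling pattern is below. It is illustrative only, not minikube's node_ready.go; the kubeconfig path and node name are placeholders taken from the logs.)

	// nodeready_sketch.go: poll a node's Ready condition until it is True
	// or the deadline expires, mirroring the wait that times out above.
	// Illustrative sketch only; not minikube's implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path and node name; substitute your own.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// The logs show a 4m wait inside a 6m overall start budget.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		err = wait.PollUntilContextCancel(ctx, 2500*time.Millisecond, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "old-k8s-version-964633", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						fmt.Printf("node has status \"Ready\":%q\n", c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			// context.DeadlineExceeded maps to the "waitNodeCondition:
			// context deadline exceeded" failure seen in the logs.
			fmt.Println("waitNodeCondition:", err)
		}
	}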
	
	
	==> CRI-O <==
	Apr 01 20:39:19 old-k8s-version-964633 crio[545]: time="2025-04-01 20:39:19.399595730Z" level=info msg="Started container" PID=1804 containerID=b6e2a15624e6bfb4518956b54ad139920c531d3fc7c23adccb5f26ae8087b4ae description=kube-system/kube-proxy-vb8ks/kube-proxy id=70c9fd8d-c43a-46a2-9ef6-4cddb4293281 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=d79aac48145edf8c532c8452a04606c6a56d97ce9aa72261c5cbf77a4a508d97
	Apr 01 20:39:51 old-k8s-version-964633 crio[545]: time="2025-04-01 20:39:51.081948048Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=62a82d81-2435-4d10-a341-7017b26d0294 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:39:51 old-k8s-version-964633 crio[545]: time="2025-04-01 20:39:51.082157285Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=62a82d81-2435-4d10-a341-7017b26d0294 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:01 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:01.990900389Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c29b85a2-3073-4179-a736-e6c3642362ec name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:01 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:01.991137576Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c29b85a2-3073-4179-a736-e6c3642362ec name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:01 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:01.991630244Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=438363e6-1def-41fb-a6cb-d2a50fd7670d name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:40:02 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:02.001155857Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:40:43 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:43.990848579Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=eade07e0-1224-4c26-8a57-422e9f2e6aeb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:43 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:43.991149905Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=eade07e0-1224-4c26-8a57-422e9f2e6aeb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:54 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:54.990877080Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0db27955-ab40-4a31-a849-469cbe5827e6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:54 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:54.991147923Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0db27955-ab40-4a31-a849-469cbe5827e6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:40:54 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:54.991649443Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=62090989-bfa6-4cf1-8f60-a2600f265b22 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:40:54 old-k8s-version-964633 crio[545]: time="2025-04-01 20:40:54.993012114Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:41:38 old-k8s-version-964633 crio[545]: time="2025-04-01 20:41:38.990804069Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=eb958d43-55bc-47d0-814d-2d78a8350f3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:41:38 old-k8s-version-964633 crio[545]: time="2025-04-01 20:41:38.991118082Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=eb958d43-55bc-47d0-814d-2d78a8350f3a name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:41:50 old-k8s-version-964633 crio[545]: time="2025-04-01 20:41:50.990960514Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=880c51d5-15ae-4589-b12d-4a3a5dfb39eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:41:50 old-k8s-version-964633 crio[545]: time="2025-04-01 20:41:50.991242805Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=880c51d5-15ae-4589-b12d-4a3a5dfb39eb name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:01 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:01.990937339Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=8eb8e2a6-ed37-4eba-a288-ff39ab413b04 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:01 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:01.991198454Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=8eb8e2a6-ed37-4eba-a288-ff39ab413b04 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:13 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:13.990748872Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=988b7e46-b928-4540-a200-33d986e4957e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:13 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:13.991040401Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=988b7e46-b928-4540-a200-33d986e4957e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:13 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:13.991516833Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=446211aa-65b0-44cc-971c-e9113a34f05c name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:42:13 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:13.992641500Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:42:57 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:57.990769867Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0e32a6e9-f267-42f3-8462-ca3148676df0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:42:57 old-k8s-version-964633 crio[545]: time="2025-04-01 20:42:57.991074829Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0e32a6e9-f267-42f3-8462-ca3148676df0 name=/runtime.v1alpha2.ImageService/ImageStatus
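	(Editor's note: the CRI-O log above shows the same ImageStatus -> "not found" -> PullImage cycle repeating for ~4 minutes without docker.io/kindest/kindnetd:v20250214-acbabc1a ever landing, so the kindnet CNI pod can never start. One way to reproduce the pull by hand is the command below; the profile name is taken from the logs, and this is a suggested diagnostic, not part of the test run.)

	minikube -p old-k8s-version-964633 ssh -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a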
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6e2a15624e6b       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   3 minutes ago       Running             kube-proxy                0                   d79aac48145ed       kube-proxy-vb8ks
	476cadc498ed3       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   3 minutes ago       Running             kube-apiserver            0                   a0f2a56e33baf       kube-apiserver-old-k8s-version-964633
	1cf26e38ac1c6       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   3 minutes ago       Running             etcd                      0                   b5c714ec70c88       etcd-old-k8s-version-964633
	e1f3c07569c92       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   3 minutes ago       Running             kube-scheduler            0                   b0dee5245ff96       kube-scheduler-old-k8s-version-964633
	a5bc89e701040       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   3 minutes ago       Running             kube-controller-manager   0                   a0fa04b1b1602       kube-controller-manager-old-k8s-version-964633
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:39:47 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:39:47 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:39:47 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:39:47 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 496e4a312fcb4e188c28b44d27ba4111
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  16m (x5 over 16m)      kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x5 over 16m)      kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x5 over 16m)      kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                    kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                    kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                    kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                    kube-proxy  Starting kube-proxy.
	  Normal  Starting                 3m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s (x8 over 3m57s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x8 over 3m57s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x8 over 3m57s)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [1cf26e38ac1c6604c953475ca04f80ac9e1430c2d45615035dcca537258ed713] <==
	2025-04-01 20:39:14.351206 I | embed: serving client requests on 192.168.85.2:2379
	2025-04-01 20:39:14.351442 I | embed: serving client requests on 127.0.0.1:2379
	2025-04-01 20:39:27.418785 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:39:29.690078 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:39:39.690167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:39:49.690058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:39:59.690084 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:09.690223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:19.690072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:29.690112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:39.690042 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:49.690173 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:40:59.690104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:09.690293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:19.690130 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:29.690131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:39.690248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:49.690108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:41:59.690088 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:09.690097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:19.690072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:29.690087 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:39.690097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:49.690043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:42:59.690125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:43:08 up  1:25,  0 users,  load average: 1.27, 1.11, 1.56
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [476cadc498ed38467dee6e6bd14670115232b713370264319c7e5a56ecaeac7d] <==
	I0401 20:39:47.491400       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:39:47.491413       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:40:17.995545       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:40:17.995600       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:40:17.995610       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:40:19.854926       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:40:19.854995       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:40:19.855002       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:41:01.641802       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:41:01.641860       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:41:01.641871       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:41:34.545803       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:41:34.545851       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:41:34.545862       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:42:05.888158       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:42:05.888213       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:42:05.888224       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:42:19.855232       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:42:19.855334       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:42:19.855342       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:42:44.443814       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:42:44.443857       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:42:44.443865       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [a5bc89e701040e08d72357e3dac6043fa2051845c4876d8d4c98324eb1a2f4d5] <==
	E0401 20:39:35.758646       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0401 20:39:36.303257       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:39:36.306515       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0401 20:39:36.349430       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0401 20:39:36.349455       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0401 20:39:36.406745       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0401 20:40:05.508638       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:40:08.056737       1 request.go:655] Throttling request took 1.048414431s, request: GET:https://192.168.85.2:8443/apis/storage.k8s.io/v1?timeout=32s
	W0401 20:40:08.907950       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:40:36.010464       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:40:40.557838       1 request.go:655] Throttling request took 1.04862398s, request: GET:https://192.168.85.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0401 20:40:41.408975       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:41:06.511833       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:41:13.059138       1 request.go:655] Throttling request took 1.048384414s, request: GET:https://192.168.85.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0401 20:41:13.910100       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:41:37.013793       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:41:45.560025       1 request.go:655] Throttling request took 1.048522705s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0401 20:41:46.411210       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:42:07.515530       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:42:18.061465       1 request.go:655] Throttling request took 1.048460694s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0401 20:42:18.912716       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:42:38.016993       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:42:50.562641       1 request.go:655] Throttling request took 1.048575654s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0401 20:42:51.413970       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:43:08.518271       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [b6e2a15624e6bfb4518956b54ad139920c531d3fc7c23adccb5f26ae8087b4ae] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0401 20:39:19.459621       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:39:19.459730       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:39:19.469176       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:39:19.469267       1 server_others.go:185] Using iptables Proxier.
	I0401 20:39:19.469492       1 server.go:650] Version: v1.20.0
	I0401 20:39:19.469980       1 config.go:224] Starting endpoint slice config controller
	I0401 20:39:19.469997       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:39:19.470025       1 config.go:315] Starting service config controller
	I0401 20:39:19.470030       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:39:19.570148       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:39:19.570204       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [e1f3c07569c92c3a8447517fe4a29b9a1107cefce6ec8dec3438e2043596f976] <==
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0401 20:39:13.424195       1 serving.go:331] Generated self-signed cert in-memory
	W0401 20:39:17.235518       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:17.235651       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:17.235691       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:17.235733       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:17.536554       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0401 20:39:17.536892       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0401 20:39:17.537005       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.537056       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.642397       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 20:41:42 old-k8s-version-964633 kubelet[986]: E0401 20:41:42.008696     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:41:47 old-k8s-version-964633 kubelet[986]: E0401 20:41:47.009312     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:41:50 old-k8s-version-964633 kubelet[986]: E0401 20:41:50.991578     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:41:52 old-k8s-version-964633 kubelet[986]: E0401 20:41:52.009985     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:41:57 old-k8s-version-964633 kubelet[986]: E0401 20:41:57.010697     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:01 old-k8s-version-964633 kubelet[986]: E0401 20:42:01.991430     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:42:02 old-k8s-version-964633 kubelet[986]: E0401 20:42:02.011279     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:07 old-k8s-version-964633 kubelet[986]: E0401 20:42:07.011973     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:12 old-k8s-version-964633 kubelet[986]: E0401 20:42:12.012613     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:17 old-k8s-version-964633 kubelet[986]: E0401 20:42:17.013417     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:22 old-k8s-version-964633 kubelet[986]: E0401 20:42:22.014071     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:27 old-k8s-version-964633 kubelet[986]: E0401 20:42:27.014731     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:32 old-k8s-version-964633 kubelet[986]: E0401 20:42:32.015390     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:37 old-k8s-version-964633 kubelet[986]: E0401 20:42:37.016061     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:42 old-k8s-version-964633 kubelet[986]: E0401 20:42:42.016718     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:45 old-k8s-version-964633 kubelet[986]: E0401 20:42:45.752330     986 remote_image.go:113] PullImage "docker.io/kindest/kindnetd:v20250214-acbabc1a" from image service failed: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:42:45 old-k8s-version-964633 kubelet[986]: E0401 20:42:45.752393     986 kuberuntime_image.go:51] Pull image "docker.io/kindest/kindnetd:v20250214-acbabc1a" failed: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:42:45 old-k8s-version-964633 kubelet[986]: E0401 20:42:45.752566     986 kuberuntime_manager.go:829] container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kindnet-token-pbwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878): ErrImagePull: rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	Apr 01 20:42:45 old-k8s-version-964633 kubelet[986]: E0401 20:42:45.752610     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 01 20:42:47 old-k8s-version-964633 kubelet[986]: E0401 20:42:47.017318     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:52 old-k8s-version-964633 kubelet[986]: E0401 20:42:52.018133     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:57 old-k8s-version-964633 kubelet[986]: E0401 20:42:57.018918     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:42:57 old-k8s-version-964633 kubelet[986]: E0401 20:42:57.991295     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:43:02 old-k8s-version-964633 kubelet[986]: E0401 20:43:02.019546     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:43:07 old-k8s-version-964633 kubelet[986]: E0401 20:43:07.020316     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	

                                                
                                                
-- /stdout --
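Diagnostic note: the kubelet log above shows the complete failure chain for this test. The kindnet CNI image pull from docker.io fails with toomanyrequests (the unauthenticated Docker Hub rate limit), so no CNI config is ever written to /etc/cni/net.d/, the node never reports Ready, and workload pods stay Pending. A minimal sketch of a workaround, assuming the image can still be obtained once (for example from a cache or an authenticated pull); the profile name and image tag are taken from the log above:

	# Side-load the kindnet image into the minikube node so kubelet
	# never has to pull it from docker.io during the test (sketch).
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p old-k8s-version-964633 image load docker.io/kindest/kindnetd:v20250214-acbabc1a
	# Watch the CNI pod recover and the node go Ready.
	kubectl --context old-k8s-version-964633 -n kube-system get pods | grep kindnet
	kubectl --context old-k8s-version-964633 get nodes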
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
E0401 20:43:09.192926   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1 (69.017095ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-5nmbk:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-5nmbk
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  3m49s (x1 over 3m51s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
	  Warning  FailedScheduling  4m18s (x10 over 12m)   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "metrics-server-9975d5f86-vj6lt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-4cckx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd95d586-p4fvg" not found

                                                
                                                
** /stderr **
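Diagnostic note: the busybox describe output above explains why the pod never scheduled. Its only tolerations are the default NoExecute ones (300s for not-ready/unreachable), which do not satisfy the NoSchedule effect of the node.kubernetes.io/not-ready taint that remains on the node while the CNI is down. A sketch of how one might confirm the taint, using the context and node name from the log (the output in the comment is illustrative, not captured by the harness):

	kubectl --context old-k8s-version-964633 get node old-k8s-version-964633 -o jsonpath='{.spec.taints}'
	# Illustrative output while the node is NotReady:
	# [{"effect":"NoSchedule","key":"node.kubernetes.io/not-ready"}]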
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (256.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (250.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
E0401 20:39:07.013349   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:39:29.791724   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:39:53.251803   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:39:56.735828   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:40:37.514853   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:40:37.711005   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: exit status 80 (4m8.492270135s)
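Diagnostic note: the stdout dump below reports addons enabled and prints no error banner, yet the command exited with status 80. A plausible reading, given that the run used --wait=true, is that minikube start only succeeds once every verified component (apiserver, system pods, node readiness, and so on) is healthy within the timeout. A sketch of how one might inspect what was still unhealthy after such a failure, using the profile name from the command above:

	minikube -p default-k8s-diff-port-993330 status
	kubectl --context default-k8s-diff-port-993330 get pods -A --field-selector=status.phase!=Running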

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Restarting existing docker container for "default-k8s-diff-port-993330" ...
	* Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
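The handshake failure above is expected right after docker start: sshd inside the container is not accepting connections yet, so libmachine redials until it is; the command succeeds about three seconds later, below. A minimal sketch of that retry pattern with golang.org/x/crypto/ssh; the key path, attempt count, and delay are illustrative, not minikube's exact values:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps re-dialing until sshd inside the freshly started
// container accepts the connection (early attempts typically fail with
// "connection reset by peer", as in the log above).
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		c, e := ssh.Dial("tcp", addr, cfg)
		if e == nil {
			return c, nil
		}
		err = e
		time.Sleep(500 * time.Millisecond)
	}
	return nil, err
}

func main() {
	// Assumed key location; the log uses .minikube/machines/<profile>/id_rsa.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-993330/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test container only
	}
	client, err := dialWithRetry("127.0.0.1:33123", cfg, 20)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh is up")
}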
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
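provision.go:117 above generates a server certificate signed by the cached minikube CA with the SAN list shown in the log. A self-contained crypto/x509 sketch of the same idea; the throwaway in-memory CA and the validity periods are assumptions for this example (minikube reuses ca.pem/ca-key.pem from .minikube/certs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway in-memory CA; minikube instead loads its cached CA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // illustrative lifetime
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log, split into IP and DNS fields.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-993330"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:     []string{"default-k8s-diff-port-993330", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
}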
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
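The find/mv runs above disable pre-existing loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix so only the CNI minikube installs stays active. A rough Go equivalent of the loopback pass (it needs root against a real /etc/cni/net.d; the glob matches the logged pattern):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirrors the logged find/mv: rename matching CNI configs out of the way
	// by appending ".mk_disabled", skipping files that are already disabled.
	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		if strings.HasSuffix(m, ".mk_disabled") {
			continue
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Println("disabled", m)
	}
}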
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
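The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. An in-memory Go sketch of the first two substitutions; minikube does this with sed over SSH, and the sample input here is made up:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up sample of 02-crio.conf content.
	conf := []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n")
	// Equivalent of the two logged sed edits: force the pause image and the cgroup driver.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	fmt.Print(string(conf))
}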
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
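kubeadm.go:946 above assembles the kubelet systemd drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 379-byte scp below). A sketch of rendering the same unit with text/template; the kubeletOpts struct is hypothetical, while the flag values are the ones from this log:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a hypothetical value struct; minikube assembles these flags in kubeadm.go.
type kubeletOpts struct {
	Binary, Hostname, NodeIP string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Binary}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	err := t.Execute(os.Stdout, kubeletOpts{
		Binary:   "/var/lib/minikube/binaries/v1.32.2/kubelet",
		Hostname: "default-k8s-diff-port-993330",
		NodeIP:   "192.168.103.2",
	})
	if err != nil {
		panic(err)
	}
}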
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
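The config above splits the pod network (podSubnet 10.244.0.0/16) from the service network (serviceSubnet 10.96.0.0/12); the two ranges must not overlap. A quick containment check with net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the kubeadm config above
	svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet
	// Overlaps reports whether the two ranges share any address; they must not.
	fmt.Println("pod/service CIDR overlap:", pod.Overlaps(svc)) // prints: false
}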
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
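The openssl -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now. The same check in Go, using the first certificate path from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// fail if the certificate expires within the next 24 hours.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("not a PEM file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another 24h")
}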
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
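Both applies above fail because the apiserver behind localhost:8444 is not answering yet after the restart, so minikube retries after a short, growing delay (retry.go:31). A generic sketch of that retry-with-backoff shape around kubectl apply; the delays and attempt count are illustrative, not minikube's jitter values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs kubectl apply until the apiserver answers; right after
// a control-plane restart the first attempts fail with "connection refused",
// exactly as in the log above.
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2 // simple doubling; minikube adds jitter
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		panic(err)
	}
	fmt.Println("applied")
}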
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2": exit status 80
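The exit status 80 above is minikube's node-readiness wait expiring, so local triage only needs the same start arguments plus a look at why the kubelet never reports Ready. A minimal sketch (the start arguments are copied verbatim from the failing invocation; the kubectl context name is assumed to match the profile name):

	out/minikube-linux-amd64 start -p default-k8s-diff-port-993330 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.32.2
	# while the wait loop polls, see which condition blocks Readiness (often the CNI)
	kubectl --context default-k8s-diff-port-993330 describe node default-k8s-diff-port-993330
	kubectl --context default-k8s-diff-port-993330 get pods -n kube-system
	# capture full logs for a GitHub issue, per the advice box in the output above
	out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs --file=logs.txt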
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353427,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:54.287928611Z",
	            "FinishedAt": "2025-04-01T20:38:53.06055829Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec09fa1a9496e05123b7a54f35ba87b679a89f15a6b0677344788b51903d4cb2",
	            "SandboxKey": "/var/run/docker/netns/ec09fa1a9496",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:be:99:3d:93:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "5aaf086e3c391b2394b006ad5aca69dfaf955cf2259cb4d42342fb401f46a6a2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
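The "Ports" block in the inspect output above records the host side of each forwarded container port; minikube resolves these at runtime with a Go template, as the log below shows for 22/tcp. An illustrative one-liner for the API server mapping (the same template pointed at 8444/tcp, shown only to make the inspect output actionable):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-993330
	# prints 33126 for the container state captured above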
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.014730016s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
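The image.go/cache.go lines above show minikube probing the local docker daemon for the kic base image and skipping the pull when it is already loaded. A minimal Go sketch of that probe, shelling out to "docker image inspect" (which exits non-zero for an unknown reference); the image ref is the one from the log, minus its digest:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already holds the
// given image: `docker image inspect` exits non-zero for unknown refs.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached locally, would pull")
	}
}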
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
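fix.go above inspects the existing container, finds state=Stopped, and restarts it rather than recreating the machine. A sketch of that inspect-then-start flow, with the container name from the log and errors reduced to panics for brevity:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "default-k8s-diff-port-993330"

	// Same probe as the log: docker container inspect --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	state := strings.TrimSpace(string(out))

	if state != "running" {
		fmt.Printf("state=%s, restarting existing docker container\n", state)
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
}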
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
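The bash one-liner above makes the host.minikube.internal entry idempotent: strip any stale line, append the fresh one, and copy the result back over /etc/hosts with sudo. The same logic as a Go sketch, minus the sudo/tmp-file indirection (point it at a scratch copy when experimenting, since /etc/hosts itself needs root):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
// "<ip>\t<host>" line -- the same replace-then-rewrite trick as the bash
// one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP and hostname taken from the log line above.
	if err := ensureHostsEntry("hosts.copy", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}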
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
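crio.go:514 concludes "all images are preloaded" from the output of "sudo crictl images --output json". A sketch of that check: decode the JSON, collect the repo tags, and compare against the tags the preload should contain (the two tags below are an illustrative subset for v1.32.2, not the full list minikube verifies):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches just enough of `crictl images --output json`
// for a repo-tag check; other fields are ignored.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Illustrative subset of a v1.32.2 preload.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.32.2",
		"registry.k8s.io/kube-controller-manager:v1.32.2",
	} {
		fmt.Printf("%-50s preloaded=%v\n", want, have[want])
	}
}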
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
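The generated kubelet unit above uses the standard systemd override trick: the bare "ExecStart=" line clears the ExecStart inherited from the base kubelet.service before the drop-in defines its own. A small sketch that renders such a drop-in, with an abridged flag set and the hostname/IP values from the log:

package main

import (
	"fmt"
	"os"
)

func main() {
	const tmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --hostname-override=%[2]s --node-ip=%[3]s --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`
	// The empty ExecStart= is required: systemd rejects a second ExecStart
	// in a drop-in unless the inherited one is cleared first.
	unit := fmt.Sprintf(tmpl, "v1.32.2", "no-preload-671514", "192.168.76.2")
	// Real target is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	// (root required); write locally for inspection.
	if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		panic(err)
	}
	fmt.Print(unit)
}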
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
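One property worth checking in a rendered kubeadm config like the one above is that podSubnet and serviceSubnet do not overlap, since kube-proxy and the CNI both assume disjoint ranges. A quick stdlib check using the two CIDRs from this config:

package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether two CIDRs share any addresses: true when
// either network contains the other's base address.
func cidrsOverlap(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16") // podSubnet above
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")  // serviceSubnet above
	fmt.Println("pod/service CIDR overlap:", cidrsOverlap(pods, svcs))
}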
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
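The openssl/ln pairs above implement OpenSSL's hashed CA lookup: "openssl x509 -hash" prints the subject-name hash, and a "<hash>.0" symlink in /etc/ssl/certs makes the certificate discoverable by that hash (b5213941.0 for minikubeCA in this run). A sketch of the same two steps (root required for /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash runs the same command as the log:
// openssl x509 -hash -noout -in <pem>
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	hash, err := subjectHash(pemPath)
	if err != nil {
		panic(err)
	}
	// OpenSSL resolves CAs as <subject-hash>.<n>; .0 is the first cert
	// with that hash.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}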
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
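The "-checkend 86400" runs above ask whether each certificate expires within the next 24 hours. The equivalent check in pure Go, pointed at one of the logged cert paths:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
// certificate's NotAfter falls inside the given window from now.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}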
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
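kubeconfig.go above finds the "no-preload-671514" cluster and context missing and repairs the kubeconfig under a write lock. A sketch of that repair with client-go's clientcmd; the server URL and CA path here are inferred from the node IP/port and the ca.crt seen elsewhere in this log, the file lock is omitted, and a matching AuthInfo entry is assumed to exist already:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20506-16361/kubeconfig"
	name := "no-preload-671514"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.76.2:8443" // node IP/port from the log
		cluster.CertificateAuthority = "/home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt"
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // assumes credentials under the same name
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}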
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
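The addon rollout above boils down to streaming each manifest to /etc/kubernetes/addons and then running the versioned kubectl against the in-cluster kubeconfig. A sketch of the apply step (sudo omitted; only a few of the dashboard manifests listed):

package main

import (
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.2/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}

	// Same shape as the logged command: kubectl apply -f a.yaml -f b.yaml ...
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}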
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
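The first dial at 20:38:53 fails with "connection reset by peer" because sshd in the just-restarted container is not accepting connections yet; the same command succeeds a few seconds later. A retry-with-backoff sketch using golang.org/x/crypto/ssh, with the port and key path from this log (host-key checking is disabled here only because the target is a local test container):

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing while the freshly started container's sshd
// comes up, the situation visible in the log above.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c *ssh.Client
		if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
			return c, nil
		}
		time.Sleep(time.Duration(i+1) * time.Second) // linear backoff
	}
	return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33113", cfg, 5)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected as", cfg.User)
}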
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
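provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the container IP 192.168.94.2, the profile name, localhost and minikube, signed by the shared minikube CA. A condensed stdlib sketch of that step (2048-bit key and a three-year lifetime chosen for brevity; minikube's actual parameters may differ):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material from the ca.pem / ca-key.pem paths in the log.
	caPair, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem",
		"/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem")
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-974821"}},
		// SANs from the provision.go:117 line above.
		DNSNames:    []string{"embed-certs-974821", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}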
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
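Setting the container-runtime options above is one remote pipeline: write CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube via sudo tee, then restart crio. A sketch of running that pipeline over an established SSH connection, reusing a client like the one from the dial sketch earlier:

package provision

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// ConfigureCRIO pushes the same sysconfig fragment as the log and bounces
// crio, using one exec session for the whole pipeline.
func ConfigureCRIO(client *ssh.Client) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	out, err := session.CombinedOutput(cmd)
	fmt.Print(string(out))
	return err
}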
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
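
configureAuth above runs in two phases: copyHostCerts mirrors ca.pem, cert.pem, and key.pem into the machine store (the rm/cp pairs), then copyRemoteCerts pushes the CA and server certs to /etc/docker over SSH. A stdlib-only sketch of the local mirroring step, with shortened paths and a hypothetical mirrorCert helper:

package main

import (
	"fmt"
	"log"
	"os"
)

// mirrorCert replaces dst with a fresh copy of src, like the
// "found ..., removing ..." / "cp: ..." pairs in the log.
func mirrorCert(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
		if err := mirrorCert(".minikube/certs/"+f, ".minikube/"+f); err != nil {
			log.Fatal(err)
		}
		fmt.Println("mirrored", f)
	}
}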
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
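
The command echoed back just above writes the runtime's sysconfig file and bounces CRI-O so the --insecure-registry flag for the service CIDR takes effect. A hedged reconstruction of how that command string could be assembled (crioSysconfigCmd is hypothetical, not minikube's code):

package main

import "fmt"

// crioSysconfigCmd builds the shell command seen in the log: persist
// CRIO_MINIKUBE_OPTIONS and restart the crio service.
func crioSysconfigCmd(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	return `sudo mkdir -p /etc/sysconfig && printf %s "` + "\n" + opts + "\n" +
		`" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}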
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
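
The two find ... -exec mv {} {}.mk_disabled invocations above park any loopback, bridge, or podman CNI configs out of the way so the recommended kindnet plugin can own networking; files already suffixed .mk_disabled are left alone. Roughly equivalent logic in Go (disableCNIConfs is a sketch, not the real cni.go):

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames matching CNI config files to *.mk_disabled,
// approximating the `find ... -exec mv {} {}.mk_disabled` calls above.
func disableCNIConfs(dir string, patterns ...string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	if err := disableCNIConfs("/etc/cni/net.d", "*loopback.conf*", "*bridge*", "*podman*"); err != nil {
		log.Fatal(err)
	}
}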
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
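
The sed series above pins the pause image and forces the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before the daemon-reload and crio restart. The same two substitutions as an in-memory Go sketch (rewriteCrioConf is illustrative only):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf approximates the sed edits above: pin the pause image
// and force the cgroupfs cgroup manager in 02-crio.conf.
func rewriteCrioConf(conf, pauseImage string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10"))
}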
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
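
Both "Will wait 60s" gates above are simple polls: stat the socket path, then retry crictl version until CRI-O answers or the deadline passes. A sketch of the second gate under that assumption (waitForCrictl is hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl polls `crictl version` until it succeeds or the
// timeout elapses, similar to the "Will wait 60s for crictl version" gate.
func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("crictl not ready after %s: %v", timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}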
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
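
The bash one-liner above refreshes the host.minikube.internal mapping: filter out any stale tab-separated entry, append the current gateway IP, and copy the temp file back over /etc/hosts. A Go sketch that renders the same command (hostsEntryCmd is a hypothetical helper):

package main

import "fmt"

// hostsEntryCmd reconstructs the bash one-liner above: drop any stale
// entry for name, append the current mapping, and install the result.
func hostsEntryCmd(ip, name string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%[2]s$' "/etc/hosts"; echo "%[1]s	%[2]s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		ip, name)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.94.1", "host.minikube.internal"))
}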
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
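
The kubelet unit logged above is rendered from the node config and shipped as a systemd drop-in (the 368-byte 10-kubeadm.conf scp'd a few lines below). A sketch of that rendering with the flag set copied from the log (kubeletDropIn is not minikube's actual template code):

package main

import "fmt"

// kubeletDropIn renders a systemd drop-in like the one logged above;
// the flag set is taken verbatim from the log.
func kubeletDropIn(version, node, ip string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`, version, node, ip)
}

func main() {
	fmt.Print(kubeletDropIn("v1.32.2", "embed-certs-974821", "192.168.94.2"))
}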
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
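
The three "scp memory -->" lines above transfer buffers rendered in memory rather than files on disk. A local-write stand-in for that pattern (minikube streams the bytes over SSH instead; writeRendered is illustrative):

package main

import (
	"fmt"
	"os"
)

// writeRendered mimics the "scp memory --> path (N bytes)" entries:
// a config rendered in memory is written straight to its destination.
func writeRendered(path string, data []byte) error {
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return err
	}
	fmt.Printf("scp memory --> %s (%d bytes)\n", path, len(data))
	return nil
}

func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n")
	if err := writeRendered("/tmp/kubeadm.yaml.new", cfg); err != nil {
		fmt.Println(err)
	}
}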
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
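Note: the test/ln/hash triplets above are minikube reimplementing what c_rehash does: OpenSSL locates a CA under /etc/ssl/certs through a symlink named after the certificate's subject hash. A minimal sketch of one such idempotent install, where cert.pem is a placeholder for any of the PEM files being copied:

    # Sketch: install one CA so OpenSSL can resolve it by subject hash (cert.pem is hypothetical).
    sudo cp cert.pem /usr/share/ca-certificates/cert.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
    # OpenSSL probes /etc/ssl/certs/<hash>.0; only (re)create the link if it is missing.
    sudo test -L "/etc/ssl/certs/${hash}.0" || \
      sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${hash}.0"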
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
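Note: each -checkend 86400 probe above asks one yes/no question: will this certificate still be valid 86400 seconds (24 hours) from now? openssl exits 0 if so and non-zero otherwise, so the exit status alone drives the keep-or-regenerate decision. Roughly:

    # Sketch: the exit status of -checkend decides whether a cert needs regenerating.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h; minikube would regenerate it" >&2
    fi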
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
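Note: the container-inspect format string that keeps recurring here is a Go template that indexes the published-ports map for 22/tcp, takes the first binding, and prints its host port; that is how the ssh clients at 127.0.0.1:33113 below learn their target. Stand-alone:

    # Sketch: ask Docker which host port is mapped to the container's 22/tcp.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-974821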
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
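Note: the /etc/hosts one-liner just above is an idempotent rewrite: grep -v strips any stale host.minikube.internal entry, echo appends the current mapping, and the temp file is copied back with sudo because a bare redirect would not run as root. Unrolled, with the same IP as the log:

    # Sketch: update /etc/hosts without accumulating duplicate entries.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts     # drop the old mapping, if any
      printf '192.168.103.1\thost.minikube.internal\n'    # append the current one
    } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"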
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
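Note: the doubled ExecStart in the unit above is the standard systemd override idiom: inside a drop-in, an empty ExecStart= first clears the value inherited from the base kubelet.service, then the second line sets the replacement (a non-oneshot service may not carry two ExecStart values). The scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below is where that drop-in lands; done by hand it would look roughly like this (flags abbreviated, the full set is the ExecStart logged above):

    # Sketch: install the kubelet override as a systemd drop-in, then reload and restart.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --config=/var/lib/kubelet/config.yaml' |
      sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet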
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
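Note: this multi-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new below; the later diff -u against the existing kubeadm.yaml is how minikube concludes "does not require reconfiguration". Outside minikube, a config of this shape would typically be checked and consumed along these lines (kubeadm.yaml standing in for the generated file):

    # Sketch: validate, then (on a fresh control plane) apply a kubeadm config of this shape.
    kubeadm config validate --config kubeadm.yaml
    sudo kubeadm init --config kubeadm.yaml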
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
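
The "needs transfer" decisions above come from comparing the image ID reported on the node against the expected hash: if `podman image inspect` fails or returns a different ID, the cached image must be loaded. A hedged sketch of that presence check, shelling out the same way the log does (function names are ours, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether the image on the node is missing or
// does not match the expected ID, mirroring the log's inspect-and-compare.
func imageNeedsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // inspect failed: image absent, so it needs transfer
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	need := imageNeedsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("needs transfer:", need)
}
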
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
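
The kubeadm config above is rendered from the parsed option struct logged at kubeadm.go:189. A minimal, illustrative text/template sketch of how such a config could be produced (the template and type here are ours; minikube's real template carries many more fields):

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the handful of values this sketch substitutes;
// field names mirror the log but the struct itself is hypothetical.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress:  "192.168.85.2",
		BindPort:          8443,
		NodeName:          "old-k8s-version-964633",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}
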
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
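
The bash one-liner above makes the control-plane /etc/hosts entry idempotent: strip any stale line ending in the hostname, then append the desired mapping and copy the result back. An equivalent hedged sketch in Go (path and names illustrative; run against a scratch file, not the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry mirrors the shell pipeline: grep -v away any stale
// line ending in "\t<host>", append "ip\thost", and write the file back.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative target file; the real flow copies the result over /etc/hosts.
	if err := upsertHostsEntry("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
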
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
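
The openssl x509 -hash / ln -fs pairs above follow the c_rehash convention: OpenSSL-based clients look up CA certificates in /etc/ssl/certs via <subject-hash>.0 symlinks, so each installed PEM gets a link named after its subject hash. A hedged Go sketch of that step (paths illustrative; requires the openssl CLI on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehash computes the certificate's subject hash with openssl, then
// symlinks <hash>.0 in certsDir to the PEM, as the log's ln -fs does.
func rehash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
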
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
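
The 'waiting up to 6m0s for node … to be "Ready"' line is a poll-until-deadline loop over the node's Ready condition; the repeated node_ready.go:53 "Ready":"False" lines elsewhere in this log are its periodic checks. A generic, illustrative sketch of that wait pattern (the readiness callback is a stand-in, not minikube's node_ready.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor polls ready at the given interval until it reports true, an
// error occurs, or the timeout elapses, mirroring the log's wait loop.
func waitFor(timeout, interval time.Duration, ready func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	err := waitFor(2*time.Second, 200*time.Millisecond, func() (bool, error) {
		return time.Since(start) > time.Second, nil // stand-in readiness check
	})
	fmt.Println("wait result:", err)
}
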
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
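Note on the retry pattern above: each "apply failed, will retry after ..." line is minikube re-running the same kubectl apply while the apiserver on localhost:8443 is still restarting, with a delay that grows and carries random jitter. A minimal sketch of that loop, assuming hypothetical names (apply, retryApply) and invented backoff constants rather than minikube's actual retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // apply stands in for the kubectl invocation; it fails the way the log
    // shows while nothing is listening on localhost:8443 yet.
    func apply(manifest string) error {
        return fmt.Errorf("The connection to the server localhost:8443 was refused")
    }

    // retryApply mirrors the log: on failure, sleep a jittered, growing
    // delay and try again until the deadline expires.
    func retryApply(manifest string, deadline time.Duration) error {
        start := time.Now()
        base := 200 * time.Millisecond // assumed starting delay; the log shows ~230ms-3.3s
        for {
            err := apply(manifest)
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("giving up on %s: %w", manifest, err)
            }
            wait := base + time.Duration(rand.Int63n(int64(base))) // add jitter
            fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            base *= 2 // back off
        }
    }

    func main() {
        _ = retryApply("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Second)
    }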
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
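The "dial tcp 192.168.85.2:8443: connect: connection refused" above means the readiness poll is failing at the TCP level: nothing is accepting connections on the apiserver port yet. A quick stand-alone probe for the same condition, assuming the address from the log line and an arbitrary 2-second timeout:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the node_ready poller dials in the log line above.
        conn, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not yet listening:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }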
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
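The storageclass, storage-provisioner, metrics-server, and dashboard applies interleave above (commands issued milliseconds apart, retries overlapping), which suggests each addon group is applied on its own goroutine. One plausible shape for that fan-out, sketched with golang.org/x/sync/errgroup; apply and the group list here are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"

        "golang.org/x/sync/errgroup"
    )

    // apply stands in for one "kubectl apply --force -f ..." per addon group.
    func apply(group string) error {
        fmt.Println("applying", group)
        return nil
    }

    func main() {
        var g errgroup.Group
        for _, grp := range []string{"storageclass", "storage-provisioner", "metrics-server", "dashboard"} {
            grp := grp // pin loop variable for the closure (pre-Go 1.22)
            g.Go(func() error { return apply(grp) })
        }
        if err := g.Wait(); err != nil {
            fmt.Println("enable addons failed:", err)
        }
    }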
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
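From this point on, the four parallel clusters (PIDs 347136, 351594, 351961, 352934) do little but poll node readiness every few seconds, and "Ready" stays "False" for the remainder of this excerpt. The check behind node_ready.go reduces to reading the node's NodeReady condition; a self-contained sketch using client-go, where the kubeconfig path and node name are taken from the log and the 2-second interval is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the NodeReady condition is True, i.e.
    // whether the log would print "Ready":"True" instead of "False".
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-964633", metav1.GetOptions{})
            switch {
            case err != nil:
                fmt.Println("error getting node:", err) // matches the "dial tcp ... connection refused" lines
            case isNodeReady(n):
                fmt.Println("node is Ready")
                return
            default:
                fmt.Printf("node %q has status \"Ready\":\"False\"\n", n.Name)
            }
            time.Sleep(2 * time.Second)
        }
    }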
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
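
	All four FirstStart/SecondStart failures above die in the same readiness loop: node_ready.go polls the node object every few seconds, the Ready condition never leaves "False", and after 4m0s of polling (the "duration metric" lines) the surrounding 6m node-start wait is declared exceeded (GUEST_START). A minimal way to re-run the same check by hand, assuming the profiles are still up and kubectl contexts named after them exist, is:

	    # inspect the Ready condition the loop keeps seeing as "False"
	    kubectl --context default-k8s-diff-port-993330 get node default-k8s-diff-port-993330 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.status.conditions[?(@.type=="Ready")].message}{"\n"}'

	    # block until Ready or time out, mirroring the wait minikube performs
	    kubectl --context default-k8s-diff-port-993330 wait node/default-k8s-diff-port-993330 \
	      --for=condition=Ready --timeout=6m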
	
	
	==> CRI-O <==
	Apr 01 20:39:07 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:07.330616602Z" level=info msg="Started container" PID=1178 containerID=f01b95ee70b78d448bb8f831dc34b6c7ae96d0ccbdce6b18c2c076cbba24760e description=kube-system/kube-proxy-btnmc/kube-proxy id=30487a4e-403e-438c-aeb6-fe1094e6e5b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c991b896744f3aec0844e91a84ef51ed12dfaa4aaef7bab0e29e4fe1a4601b5e
	Apr 01 20:39:39 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:39.801491358Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4514d84f-40a3-4595-9f10-4344f36f1bbf name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:39 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:39.801806267Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4514d84f-40a3-4595-9f10-4344f36f1bbf name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:50.644716616Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=9fae8350-5e14-49ba-b5b4-6c6fafb2a365 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:50.644978766Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=9fae8350-5e14-49ba-b5b4-6c6fafb2a365 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:39:50 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:50.645548950Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cc3aa225-75ed-4f21-99c2-ceb284e17aee name=/runtime.v1.ImageService/PullImage
	Apr 01 20:39:50 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:39:50.646726024Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:40:34 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:34.644358733Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=03bc9b13-83d8-4140-b592-411f8e688043 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:34 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:34.644633929Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=03bc9b13-83d8-4140-b592-411f8e688043 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:46.644600605Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5a815c91-65d6-47d9-a201-9ecf841f8618 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:46.644856880Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5a815c91-65d6-47d9-a201-9ecf841f8618 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:40:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:46.645377188Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=93cb942e-d83f-4b86-87ea-674f9c019ef9 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:40:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:40:46.646560709Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:41:33 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:41:33.644500311Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=fba58e42-2f29-490f-90b8-de284b24c082 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:33 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:41:33.644785197Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=fba58e42-2f29-490f-90b8-de284b24c082 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:48 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:41:48.644450014Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3c53007c-53a6-4928-a54b-0c2613366fba name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:41:48 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:41:48.645860151Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3c53007c-53a6-4928-a54b-0c2613366fba name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:01 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:01.644079600Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2826aef3-2168-4a13-9281-58162f0b5154 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:01 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:01.644398314Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2826aef3-2168-4a13-9281-58162f0b5154 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:01 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:01.644894853Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=06565780-af00-41d5-902d-8b48c942dc40 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:42:01 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:01.655065583Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:42:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:46.644307312Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=792e5c0b-6ad8-4c36-9d07-dbddebd7b32b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:46 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:46.644576740Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=792e5c0b-6ad8-4c36-9d07-dbddebd7b32b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:59 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:59.644460846Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=44d76905-9534-4ee4-ae29-f2e594b84d38 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:42:59 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:42:59.644799712Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=44d76905-9534-4ee4-ae29-f2e594b84d38 name=/runtime.v1.ImageService/ImageStatus
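
	The CRI-O log shows the same pull retried for the whole run and never succeeding: every ImageStatus check reports docker.io/kindest/kindnetd:v20250214-acbabc1a as not found, and each PullImage attempt leaves no image behind. As a sketch for confirming the image state from inside the node (assuming the profile is still running and crictl is available in the minikube node, as it normally is):

	    minikube -p default-k8s-diff-port-993330 ssh -- sudo crictl images | grep kindnetd
	    minikube -p default-k8s-diff-port-993330 ssh -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a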
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f01b95ee70b78       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   3 minutes ago       Running             kube-proxy                1                   c991b896744f3       kube-proxy-btnmc
	65a195d0c0eee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   4 minutes ago       Running             kube-scheduler            1                   c122dcfc3b396       kube-scheduler-default-k8s-diff-port-993330
	3fc5e3c8360ed       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   4 minutes ago       Running             kube-apiserver            1                   ed07a91d341b7       kube-apiserver-default-k8s-diff-port-993330
	359dfdc6cc6fc       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   4 minutes ago       Running             kube-controller-manager   1                   5aa5cbe680b17       kube-controller-manager-default-k8s-diff-port-993330
	97f8ee6669267       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   4 minutes ago       Running             etcd                      1                   81f7f6b1c2968       etcd-default-k8s-diff-port-993330
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:39:06 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:39:06 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:39:06 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:39:06 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 a059a387258444c8a5d2ccbb6a4f4f0c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 3m55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)    kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)    kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)    kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                  node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
	  Normal   Starting                 4m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m2s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m2s (x8 over 4m2s)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m54s                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
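
	The describe output ties the symptoms together: the node carries the node.kubernetes.io/not-ready:NoSchedule taint, and its Ready condition is stuck on the missing CNI configuration, which the kindnet-cni container would write into /etc/cni/net.d once it started (that path is its cni-cfg mount in the kubelet log below). Checking the directory the kubelet complains about is a one-liner, assuming the profile is still running:

	    minikube -p default-k8s-diff-port-993330 ssh -- ls -la /etc/cni/net.d/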
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [97f8ee6669267ad80232ce8bf71fc941954cb5cbcd412ad8213873a5a511b38b] <==
	{"level":"info","ts":"2025-04-01T20:39:02.920044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-01T20:39:02.920053Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-01T20:39:02.920742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-04-01T20:39:02.920834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-04-01T20:39:02.920931Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:39:02.920955Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:39:02.920963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:02.920994Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:04.749608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.750727Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:39:04.750738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.750768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.751743Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.752148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:04.752611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.753116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:38.126747Z","caller":"traceutil/trace.go:171","msg":"trace[1345586840] transaction","detail":"{read_only:false; response_revision:853; number_of_response:1; }","duration":"118.996467ms","start":"2025-04-01T20:39:38.007727Z","end":"2025-04-01T20:39:38.126724Z","steps":["trace[1345586840] 'process raft request'  (duration: 56.085101ms)","trace[1345586840] 'compare'  (duration: 62.811604ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:43:03 up  1:25,  0 users,  load average: 1.20, 1.10, 1.56
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3fc5e3c8360edb7984be32faf8eef372adf72360ea8d96ce692122c037453681] <==
	I0401 20:39:07.962584       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0401 20:39:08.023894       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0401 20:39:08.128401       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.18.56"}
	I0401 20:39:08.146384       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.249.195"}
	I0401 20:39:09.509593       1 controller.go:615] quota admission added evaluator for: endpoints
	I0401 20:39:09.660470       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0401 20:39:09.760220       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0401 20:40:07.228502       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:40:07.228606       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:40:07.229729       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:40:07.233949       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:40:07.234017       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:40:07.235135       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:42:07.230734       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:42:07.230833       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:42:07.231952       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:42:07.235286       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:42:07.235326       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:42:07.236473       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
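
	The repeated 503s for v1beta1.metrics.k8s.io indicate the aggregated metrics API has no reachable backend, which is expected while the node is NotReady and the metrics-server pod cannot run. One way to inspect the aggregation status, assuming kubectl access to the profile:

	    kubectl --context default-k8s-diff-port-993330 get apiservice v1beta1.metrics.k8s.io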
	
	
	==> kube-controller-manager [359dfdc6cc6fc25f3136a3577c905adb20d4762ca289cc023c7aa3e8c0221998] <==
	I0401 20:39:09.319373       1 shared_informer.go:320] Caches are synced for crt configmap
	I0401 20:39:09.322647       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:39:09.322666       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0401 20:39:09.322673       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0401 20:39:09.327890       1 shared_informer.go:320] Caches are synced for garbage collector
	I0401 20:39:09.921831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="158.364746ms"
	I0401 20:39:09.925287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="161.630074ms"
	I0401 20:39:09.935175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="13.176994ms"
	I0401 20:39:09.935338       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="51.12µs"
	I0401 20:39:09.937856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="12.441524ms"
	I0401 20:39:09.937947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-86c6bf9756" duration="50.478µs"
	E0401 20:39:39.315939       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:39:39.334147       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:09.322315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:09.341315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:40:39.327773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:40:39.348844       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:09.333512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:09.355658       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:41:39.338413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:41:39.361364       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:09.344604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:09.368219       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:42:39.349649       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:42:39.375285       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [f01b95ee70b78d448bb8f831dc34b6c7ae96d0ccbdce6b18c2c076cbba24760e] <==
	I0401 20:39:07.540137       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:07.958690       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:39:07.959920       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:08.054675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:08.055270       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:08.058888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:08.059395       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:08.059435       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:08.060790       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:08.060804       1 config.go:199] "Starting service config controller"
	I0401 20:39:08.060830       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:08.060832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:08.061405       1 config.go:329] "Starting node config controller"
	I0401 20:39:08.061423       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:08.160990       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:08.160982       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:08.161646       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [65a195d0c0eee552be400b60ac82ad3be750b1213af7968bc93e67d39c09622b] <==
	I0401 20:39:03.764615       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:06.018090       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:06.042164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:06.042298       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:06.042343       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:06.146155       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:06.146255       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.153712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:06.156339       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:06.161882       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:06.158746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:06.263913       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:42:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:21.658443     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540141658256207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:21.658484     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540141658256207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:21.681202     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:26 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:26.682043     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:31.659541     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540151659264145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:31.659583     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540151659264145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:31.682822     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:33 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:33.772599     668 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:33 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:33.772664     668 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Apr 01 20:42:33 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:33.772801     668 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-9xbmt_kube-system(68b2c7ae-356c-49af-994e-ada27ca91c66): ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 01 20:42:33 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:33.773991     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:42:36 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:36.685344     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:41.660444     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540161660245145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:41.660485     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540161660245145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:41.686499     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:46 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:46.644897     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:42:46 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:46.687535     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:51.661825     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540171661604689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:51.661871     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540171661604689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:42:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:51.688726     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:56 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:56.690210     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:42:59 default-k8s-diff-port-993330 kubelet[668]: E0401 20:42:59.645089     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:43:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:43:01.662817     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540181662638623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:43:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:43:01.662861     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540181662638623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:43:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:43:01.691802     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
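The kubelet excerpt above contains the whole failure chain for this run: the kindnet CNI image pull fails against Docker Hub's unauthenticated pull rate limit (toomanyrequests), so no CNI config is ever written to /etc/cni/net.d/, the node keeps its node.kubernetes.io/not-ready taint, and every pod in the post-mortem below stays Pending. The recurring eviction_manager "missing image stats" errors are a symptom of the same degraded runtime state and can be cross-checked on the node with "sudo crictl imagefsinfo". A minimal sketch for confirming the rate limit independently of the kubelet, using Docker Hub's documented ratelimitpreview check (assumes curl and jq are available on the test host):

	# Fetch an anonymous pull token, then read the rate-limit headers from a HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
	# ratelimit-remaining: 0 (or an HTTP 429) corresponds to the toomanyrequests errors above.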
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1 (71.692809ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7wrpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  6m53s (x2 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  3m58s                default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-998nd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-dskhc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-rwzdk" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (250.46s)
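Both FailedScheduling events above blame the untolerated node.kubernetes.io/not-ready taint rather than anything in the busybox spec, consistent with the CNI failure in the kubelet log. A quick way to confirm the node side directly (a sketch; assumes the test kubeconfig context is still reachable):

	kubectl --context default-k8s-diff-port-993330 get nodes \
	  -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
	kubectl --context default-k8s-diff-port-993330 describe node | grep -A 8 'Conditions:'
	# Expect Ready=False ("container runtime network not ready") until the kindnet pull succeeds.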

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.39s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-d2blk" [7a49c269-ae5f-4a52-b427-720736dc552d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:51:57.802036046 +0000 UTC m=+4003.402967490
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-671514 describe po kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-671514 describe po kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-d2blk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             <none>
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
  kubernetes-dashboard:
    Image:      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dl9l6 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-dl9l6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  12m                    default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  2m29s (x2 over 7m29s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-671514 logs kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-671514 logs kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard:
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
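The harness implements this wait with its own pod-list poll; an equivalent hand-run probe using kubectl's built-in wait (a sketch, not the test's helper) fails the same way, since the pod never schedules and therefore never gains a Ready condition:

	kubectl --context no-preload-671514 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
	# Times out after 540s for the same reason the test's 9m0s poll does.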
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:

-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347539,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:47.214891198Z",
	            "FinishedAt": "2025-04-01T20:38:46.056346181Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bbc852e72936fcd498ad1c3a51d7c1f88352c6a93862744e1874c53a1007c0b",
	            "SandboxKey": "/var/run/docker/netns/5bbc852e7293",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:42:07:e3:85:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "3e43b7030559efe8587100f9aafe4e5d830bd7b517b3927b0b1dddcdf10d9cd5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
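Two values in the inspect output are worth decoding. "Memory": 2306867200 is the --memory=2200 flag from the Audit table below expressed in bytes (2200 x 1024 x 1024 = 2,306,867,200), and "NanoCpus": 2000000000 is 2 CPUs. The 22/tcp -> 127.0.0.1:33108 binding is the SSH endpoint that minikube itself extracts in the Last Start log below, using the same Go template:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-671514
	# Prints 33108 for this container, matching the sshutil.go "new ssh client" lines below.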
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-671514 logs -n 25: (1.02294914s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
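	At this point the 347136 process has finished reconfiguring cri-o: crictl is pointed at the cri-o socket, the pause image and cgroup settings are rewritten with sed, and crio is restarted (the interleaved 352934 lines below belong to the concurrent default-k8s-diff-port-993330 start). The net effect on the drop-in is approximately the following (a sketch of only the keys touched above; section placement assumes cri-o's stock layout, and the rest of the file comes from the base image):
	
	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	
	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock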
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
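
Editorial note: the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` invocation that recurs throughout this log resolves the host port Docker published for the container's SSH port 22 (33113 for embed-certs-974821). A hedged sketch of the same lookup in Go via os/exec; the helper name is ours:

	package kicssh

	import (
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port mapped to a container's 22/tcp, using
	// the same Go template the log passes to `docker container inspect`.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil // e.g. "33113" in the log above
	}
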
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
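
Editorial note: the SSH script above is an idempotent /etc/hosts edit: do nothing if some line already ends in the hostname, rewrite an existing 127.0.1.1 alias if one exists, and append otherwise. The same logic as a Go sketch, assuming direct file access (minikube does it over SSH precisely because the file lives inside the container):

	package hosts

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/sed/tee sequence in the SSH command above.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			// corresponds to the `grep -xq '.*\s<hostname>'` existence check
			if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
				return nil // already present, nothing to do
			}
		}
		entry := "127.0.1.1 " + hostname
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = entry // the sed branch: replace the existing alias
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		out := strings.Join(lines, "\n")
		if !strings.HasSuffix(out, "\n") {
			out += "\n"
		}
		return os.WriteFile(path, []byte(out+entry+"\n"), 0644) // the tee -a branch
	}
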
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
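
Editorial note: the `df -h /var | awk 'NR==2{print $5}'` probe above reads the Use% of the /var filesystem (NR==2 skips df's header row; $5 is the Use% column). For reference, the same figure can be approximated natively in Go instead of a shell round-trip (df rounds up, which this sketch does not bother with):

	package diskcheck

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// varUsePercent approximates what `df /var` reports in its Use% column.
	func varUsePercent() (string, error) {
		var st unix.Statfs_t
		if err := unix.Statfs("/var", &st); err != nil {
			return "", err
		}
		used := st.Blocks - st.Bfree
		// df measures against blocks available to unprivileged users (Bavail).
		pct := float64(used) / float64(used+st.Bavail) * 100
		return fmt.Sprintf("%.0f%%", pct), nil
	}
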
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
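
Editorial note: the chain of sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leaves /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below before crio is restarted. This is a reconstruction from the commands, not output captured by the test, and the section headers are assumed from the kicbase image's stock drop-in:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
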
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
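
Editorial note: "Will wait 60s for crictl version" above is a bounded readiness poll: re-run the version probe until the freshly restarted crio answers on its socket, or the budget expires. A hedged Go sketch of that pattern (the function name and 2s retry interval are ours; only the command and the 60s budget come from the log):

	package runtimecheck

	import (
		"context"
		"os/exec"
		"time"
	)

	// waitForCrictl re-runs the version probe from the log until CRI-O
	// responds, giving up after the same 60s budget the log mentions.
	func waitForCrictl(ctx context.Context) error {
		ctx, cancel := context.WithTimeout(ctx, 60*time.Second)
		defer cancel()
		for {
			if exec.CommandContext(ctx, "sudo", "/usr/bin/crictl", "version").Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second): // retry interval is our assumption
			}
		}
	}
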
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
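The file written above, /var/tmp/minikube/kubeadm.yaml.new, is the multi-document YAML shown a few lines earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration concatenated with "---" separators. As a sketch, such a file can be sanity-checked on the node before kubeadm consumes it; the "config validate" subcommand is an assumption here in that it only exists in reasonably recent kubeadm releases (it does in the v1.32 line used by this run):
	# hypothetical check, run on the node; path matches the scp target above
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new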
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
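The /etc/hosts rewrite above is a replace-then-append idiom: filter out any stale control-plane.minikube.internal line, append the current mapping, and copy the result back. Using cp rather than mv is deliberate: inside a container /etc/hosts is typically bind-mounted, so it can be overwritten in place but not replaced by rename. A standalone sketch of the same idiom, with the IP and host name as placeholders:
	ip=192.168.94.2; name=control-plane.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$   # cp keeps the bind-mounted inode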
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
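The repeating ls / x509 -hash / ln -fs triplet above implements OpenSSL's hashed certificate-directory convention: a CA in /etc/ssl/certs is trusted when a symlink named <subject-hash>.0 points at its PEM file. A generic sketch of one iteration, assuming the PEM is already in place:
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # yields e.g. b5213941.0, as seen above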
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
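Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now; a non-zero exit is what would push minikube to regenerate the cert. The same check, sketched as a decision:
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
	    echo "cert good for at least another day"
	else
	    echo "cert expires within 24h (or failed to parse); regenerate"
	fi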
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
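An empty id list from the crictl query above simply means no matching kube-system containers were found at this point in the restart. The same label-filtered listing can be run by hand on the node:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system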
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
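The restart decision above hinges on diff's exit status: kubeadm.yaml.new is rendered fresh, diffed against the copy already on disk, and an exit of 0 (no changes) lets minikube skip re-running kubeadm entirely. Sketched as a decision:
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	    echo "running cluster does not require reconfiguration"
	fi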
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
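The -f argument above is a Go template evaluated against docker inspect's JSON: index the Ports map by "22/tcp", take element 0 of the bindings array, and print its HostPort, i.e. the host port that forwards to the container's SSH port. Standalone, with the container name from this test:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-974821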
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
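The empty ExecStart= line in the unit above is deliberate systemd idiom: an ordinary service may carry only one ExecStart, so a drop-in that wants to replace the command must first clear the inherited value with an empty assignment and then set its own. A minimal sketch of the same override pattern (paths and flags illustrative, not this run's exact drop-in):
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet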
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
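Repeating -f batches all ten dashboard manifests into a single kubectl apply. Had the dashboard files lived in a directory of their own (a hypothetical layout; here /etc/kubernetes/addons also holds the metrics-server and storage manifests, so this would over-apply), the directory form of -f would be equivalent:
	kubectl apply -f /etc/kubernetes/addons/dashboard/   # hypothetical dashboard-only directory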
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
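The stat/scp/tar sequence above is minikube's preload path: an existence probe for /preloaded.tar.lz4 fails with status 1, so the ~473 MB cached tarball is copied to the node and unpacked into /var. A hedged Go sketch of that flow (ensurePreload is illustrative; the real transfer is an scp over the ssh runner, not a local cp):

    package sketch

    import "os/exec"

    // ensurePreload reproduces the stat -> copy -> extract sequence above. The
    // command strings come from the log; the helper itself is a sketch.
    func ensurePreload(cachedTarball string) error {
        if exec.Command("stat", "-c", "%s %y", "/preloaded.tar.lz4").Run() != nil {
            // Existence check failed (status 1 in the log): transfer the tarball.
            if err := exec.Command("sudo", "cp", cachedTarball, "/preloaded.tar.lz4").Run(); err != nil {
                return err
            }
        }
        // --xattrs-include security.capability keeps file capabilities intact;
        // -I lz4 streams decompression through the lz4 binary found via `which lz4`.
        return exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").Run()
    }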
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
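The two warnings above show the usual addon bootstrap race: kubelet has just been started, the apiserver on localhost:8444 is not yet accepting connections, so each kubectl apply fails and retry.go schedules another attempt after a short randomized delay (first ~167ms, then ~331ms). A minimal sketch of that loop, assuming a jittered fixed-range backoff rather than minikube's exact retry policy:

    package sketch

    import (
        "log"
        "math/rand"
        "time"
    )

    // applyWithRetry retries a failing apply with a short jittered delay,
    // mirroring the "will retry after ..." lines above. Sketch only.
    func applyWithRetry(apply func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            d := time.Duration(100+rand.Intn(400)) * time.Millisecond // cf. 167ms, 331ms above
            log.Printf("will retry after %v: %v", d, err)
            time.Sleep(d)
        }
        return err
    }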
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
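The block above is the cached-image reconciliation: each pinned image is inspected in the node's runtime, flagged "needs transfer" when its ID differs from the expected hash, removed with crictl rmi, and reloaded from the local cache directory, which in this run is missing the kube-apiserver tarball, hence the warning. A simplified sketch, where ensureImage and the podman load step are illustrative stand-ins for what cache_images.go does:

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // ensureImage sketches the decision chain above: inspect, compare IDs,
    // remove a stale image, reload from cache. Details are simplified.
    func ensureImage(ref, wantID, cacheDir string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err == nil && strings.TrimSpace(string(out)) == wantID {
            return nil // image already present at the expected hash
        }
        _ = exec.Command("sudo", "crictl", "rmi", ref).Run() // drop the stale image, if any
        // e.g. .../cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
        cached := filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_"))
        if _, err := os.Stat(cached); err != nil {
            // The failure mode logged above: no cached copy on disk.
            return fmt.Errorf("load cached image %s: %w", ref, err)
        }
        return exec.Command("sudo", "podman", "load", "-i", cached).Run()
    }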
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
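The generated unit above relies on a standard systemd drop-in idiom: the bare ExecStart= line first clears the base unit's command so the second ExecStart= fully replaces it. A toy sketch of writing such a drop-in to the path scp'd a few lines below (writeKubeletDropIn is hypothetical):

    package sketch

    import "os"

    // writeKubeletDropIn writes a 10-kubeadm.conf drop-in. The empty ExecStart=
    // resets the base unit's command before the override takes effect.
    func writeKubeletDropIn(execStart string) error {
        unit := "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\nExecStart=" + execStart + "\n"
        return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644)
    }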
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
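The stacked YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the kubeadm options struct logged at kubeadm.go:189. A toy text/template sketch of that rendering, covering only a few fields of the first document (minikube's real template is far larger):

    package sketch

    import (
        "strings"
        "text/template"
    )

    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta2\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  criSocket: {{.CRISocket}}\n" +
        "  name: \"{{.NodeName}}\"\n"

    // renderInitConfig fills the template from an options value, the same
    // shape of operation that produced the config dump above.
    func renderInitConfig(data any) (string, error) {
        t, err := template.New("kubeadm").Parse(initCfg)
        if err != nil {
            return "", err
        }
        var b strings.Builder
        if err := t.Execute(&b, data); err != nil {
            return "", err
        }
        return b.String(), nil
    }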
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
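Each CA above is published to the system trust store by linking it under its OpenSSL subject-hash name: openssl x509 -hash -noout prints the hash (e.g. b5213941) and the cert is symlinked as /etc/ssl/certs/<hash>.0, the filename OpenSSL's verifier looks up at handshake time. A small sketch of the same two steps (installCATrust is a hypothetical name):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // installCATrust computes the subject-hash filename and symlinks the cert
    // into /etc/ssl/certs, as in the ln -fs commands above.
    func installCATrust(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        return exec.Command("sudo", "ln", "-fs", certPath, link).Run() // replace any stale link
    }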
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
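The run of -checkend probes above verifies that every control-plane certificate remains valid for at least 86400 seconds (24 hours) before the existing certs are reused; openssl exits non-zero if a cert would expire within that window. A one-function sketch:

    package sketch

    import "os/exec"

    // validFor24h reports whether the certificate stays valid for the next
    // 86400 seconds, exactly the probe run above on each control-plane cert.
    func validFor24h(certPath string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
    }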
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
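The restart decision above hinges on a plain diff: the deployed /var/tmp/minikube/kubeadm.yaml is compared with the freshly rendered .new copy, and an empty diff yields "The running cluster does not require reconfiguration". A sketch relying on diff's documented exit codes (0 identical, 1 different, greater than 1 on error):

    package sketch

    import (
        "errors"
        "os/exec"
    )

    // needsReconfig diffs the deployed kubeadm config against the newly
    // generated one, as in the sudo diff -u line above. Illustrative helper.
    func needsReconfig() (bool, error) {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
        if err == nil {
            return false, nil // identical: reuse the running control plane
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, nil // configs differ: take the reconfiguration path
        }
        return false, err // diff itself failed
    }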
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
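	The retry.go:31 lines above show how the addon enable eventually completed: each failed apply is re-run after a short, jittered delay that grows across attempts (370ms and 235ms early on, up to 3.3s near the end) until the apiserver accepts connections, at which point all four addon applies finish within seconds of each other. A self-contained sketch of that backoff loop, with illustrative names rather than minikube's actual retry package API:

	// A minimal retry-with-jittered-backoff loop of the kind the
	// retry.go:31 log lines reflect: re-run the operation with a
	// roughly doubling, randomized delay until it succeeds or the
	// deadline passes.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(op func() error, initial time.Duration, deadline time.Time) error {
		delay := initial
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			// Jitter the delay (50%-150% of nominal) so concurrent
			// appliers do not hammer the apiserver in lockstep,
			// then roughly double it for the next attempt.
			sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 { // simulate the apiserver coming up on the 4th try
				return fmt.Errorf("connection to the server localhost:8443 was refused")
			}
			return nil
		}, 250*time.Millisecond, time.Now().Add(30*time.Second))
		fmt.Println("result:", err, "after", attempts, "attempts")
	}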
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
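
All four StartStop profiles above fail identically: each minikube process polls node readiness (node_ready.go) every few seconds, never observes "Ready":"True", and gives up at the 4m0s mark with the GUEST_START / context deadline exceeded error. For orientation only, here is a minimal client-go sketch of that kind of readiness poll; it is an illustration under stated assumptions (client-go on the module path, placeholder kubeconfig path and node name), not minikube's actual implementation.

// readiness_poll.go — a minimal sketch, assuming client-go is available and a
// kubeconfig points at the cluster. Illustrates the kind of check that
// node_ready.go performs; it is NOT minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and node name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		n, err := client.CoreV1().Nodes().Get(ctx, "no-preload-671514", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for node to be Ready")
			return
		case <-time.After(2 * time.Second):
			// poll again
		}
	}
}

The salient point is the two-channel select: a fixed poll interval keeps API-server load low, while the context deadline is what produces the "waiting for node to be ready: waitNodeCondition: context deadline exceeded" style of failure recorded above.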
	
	
	==> CRI-O <==
	Apr 01 20:49:20 no-preload-671514 crio[550]: time="2025-04-01 20:49:20.555919045Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c22ac19b-4224-4918-b9c3-8d572657b1f8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:32 no-preload-671514 crio[550]: time="2025-04-01 20:49:32.555149792Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7aa6c5d2-2916-4375-9d27-959be9e125e9 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:32 no-preload-671514 crio[550]: time="2025-04-01 20:49:32.555449660Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7aa6c5d2-2916-4375-9d27-959be9e125e9 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:44 no-preload-671514 crio[550]: time="2025-04-01 20:49:44.555158439Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=01facfc4-b5c7-4a46-ae1b-9b70c6bb035c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:44 no-preload-671514 crio[550]: time="2025-04-01 20:49:44.555482154Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=01facfc4-b5c7-4a46-ae1b-9b70c6bb035c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:59 no-preload-671514 crio[550]: time="2025-04-01 20:49:59.555432723Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cce63e42-b2a1-4e8c-a149-466d41182bc0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:59 no-preload-671514 crio[550]: time="2025-04-01 20:49:59.555718103Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cce63e42-b2a1-4e8c-a149-466d41182bc0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:14 no-preload-671514 crio[550]: time="2025-04-01 20:50:14.555434858Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6b4cfdd6-0216-4818-b735-94d4393f4c4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:14 no-preload-671514 crio[550]: time="2025-04-01 20:50:14.555708682Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6b4cfdd6-0216-4818-b735-94d4393f4c4e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:27 no-preload-671514 crio[550]: time="2025-04-01 20:50:27.555169081Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=63878608-f3da-493f-b86d-cdaad9cd7b60 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:27 no-preload-671514 crio[550]: time="2025-04-01 20:50:27.555569287Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=63878608-f3da-493f-b86d-cdaad9cd7b60 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:42 no-preload-671514 crio[550]: time="2025-04-01 20:50:42.555329071Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=654cd6a4-d2cd-46c7-9bed-796bb22b3bea name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:42 no-preload-671514 crio[550]: time="2025-04-01 20:50:42.555706816Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=654cd6a4-d2cd-46c7-9bed-796bb22b3bea name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:54 no-preload-671514 crio[550]: time="2025-04-01 20:50:54.555265928Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a824acef-2e5e-4ed5-8dec-7e001fc9711c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:54 no-preload-671514 crio[550]: time="2025-04-01 20:50:54.555524980Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a824acef-2e5e-4ed5-8dec-7e001fc9711c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:07 no-preload-671514 crio[550]: time="2025-04-01 20:51:07.555575244Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=eab957c7-dce4-42af-beee-48141d53cc45 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:07 no-preload-671514 crio[550]: time="2025-04-01 20:51:07.555885660Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=eab957c7-dce4-42af-beee-48141d53cc45 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:19 no-preload-671514 crio[550]: time="2025-04-01 20:51:19.555354097Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4589a976-8797-4bb3-9ac3-cc7074260305 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:19 no-preload-671514 crio[550]: time="2025-04-01 20:51:19.555586825Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4589a976-8797-4bb3-9ac3-cc7074260305 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:31 no-preload-671514 crio[550]: time="2025-04-01 20:51:31.555452749Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=965f6343-27b3-460d-afd5-d43eaae2692d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:31 no-preload-671514 crio[550]: time="2025-04-01 20:51:31.555769161Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=965f6343-27b3-460d-afd5-d43eaae2692d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:42 no-preload-671514 crio[550]: time="2025-04-01 20:51:42.555449043Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=fd7dd81c-0ed3-45d4-9a44-e38c426433a2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:42 no-preload-671514 crio[550]: time="2025-04-01 20:51:42.555732119Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=fd7dd81c-0ed3-45d4-9a44-e38c426433a2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:57 no-preload-671514 crio[550]: time="2025-04-01 20:51:57.555375092Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2da7614b-a5e5-4bad-bc0a-abf9f5e38364 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:57 no-preload-671514 crio[550]: time="2025-04-01 20:51:57.555658895Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2da7614b-a5e5-4bad-bc0a-abf9f5e38364 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea145bd33786b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                1                   ce01896c90f77       kube-proxy-pfvch
	ee48c6782a18b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   13 minutes ago      Running             kube-apiserver            1                   56ea918890fe0       kube-apiserver-no-preload-671514
	c433696fcee19       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   13 minutes ago      Running             kube-controller-manager   1                   84d0bba648e43       kube-controller-manager-no-preload-671514
	b1d13381b02cc       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   13 minutes ago      Running             kube-scheduler            1                   b988612136b4f       kube-scheduler-no-preload-671514
	c26ee68cb1e41       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   13 minutes ago      Running             etcd                      1                   aba801a800b41       etcd-no-preload-671514
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:51:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 607874eb563c47059868a4160125dbb6
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientPID     25m                kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 25m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  25m                kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m                kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 25m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           25m                node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
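
The describe output shows the failure's root cause in miniature: the only False condition is Ready, with reason KubeletNotReady ("No CNI configuration file in /etc/cni/net.d/"), while the CRI-O log above repeatedly reports that docker.io/kindest/kindnetd:v20250214-acbabc1a cannot be found. The kindnet-5tgtq pod is scheduled, but with its image unavailable its container is presumably stuck in a Waiting state. A minimal diagnostic sketch (assuming client-go and a valid kubeconfig; the path, and the pod/container names in the comment, are placeholders) that would surface such pods:

// stuck_pods.go — a minimal sketch, assuming client-go and a reachable cluster.
// Lists kube-system pods whose containers sit in a Waiting state, e.g. an
// image-pull failure like the one implied by the CRI-O log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil {
				// e.g. "kindnet-5tgtq/kindnet-cni: ImagePullBackOff (...)" — names illustrative
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}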
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [c26ee68cb1e41434cb1773276a80f9b07dd93b734f39daae74d2886e50d29ba0] <==
	{"level":"info","ts":"2025-04-01T20:38:55.525537Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-04-01T20:38:55.525733Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-04-01T20:38:55.526485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:38:55.526538Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:38:57.022450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.023544Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:38:57.023604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.023936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.024487Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.024568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.025105Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:38:57.025225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:00.430539Z","caller":"traceutil/trace.go:171","msg":"trace[280012238] transaction","detail":"{read_only:false; response_revision:772; number_of_response:1; }","duration":"101.218224ms","start":"2025-04-01T20:39:00.329302Z","end":"2025-04-01T20:39:00.430521Z","steps":["trace[280012238] 'process raft request'  (duration: 46.826091ms)","trace[280012238] 'compare'  (duration: 54.291765ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:00.548330Z","caller":"traceutil/trace.go:171","msg":"trace[1807709246] transaction","detail":"{read_only:false; response_revision:773; number_of_response:1; }","duration":"108.767351ms","start":"2025-04-01T20:39:00.439528Z","end":"2025-04-01T20:39:00.548295Z","steps":["trace[1807709246] 'process raft request'  (duration: 96.291629ms)","trace[1807709246] 'compare'  (duration: 12.091718ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:48:57.043906Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":940}
	{"level":"info","ts":"2025-04-01T20:48:57.048190Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":940,"took":"3.988148ms","hash":4160464570,"current-db-size-bytes":1757184,"current-db-size":"1.8 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-04-01T20:48:57.048225Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4160464570,"revision":940,"compact-revision":505}
	
	
	==> kernel <==
	 20:51:59 up  1:34,  0 users,  load average: 0.23, 0.40, 1.00
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ee48c6782a18ba4755d82a0a5bf1ad1b855dfd1d70fdd7295d33e8a88f8775d5] <==
	I0401 20:46:59.133679       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:46:59.133712       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:48:58.133729       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:48:58.133864       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0401 20:48:59.135273       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:48:59.135295       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:48:59.135335       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:48:59.135372       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:48:59.136463       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:48:59.136487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:49:59.137031       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:49:59.137031       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:49:59.137149       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0401 20:49:59.137195       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:49:59.138280       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:49:59.138299       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c433696fcee19b99e87b3d9433f8add31e3b93cb7663068ef9be96761a9725fd] <==
	I0401 20:46:02.460397       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:46:26.495017       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	E0401 20:46:32.401685       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:46:32.466659       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:02.407595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:02.473943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:32.413370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:32.481214       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:02.418788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:02.488098       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:32.423626       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:32.496097       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:49:02.429889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:02.502985       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:49:32.435478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:32.509355       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:02.441322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:02.516585       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:32.447216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:32.523449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:02.452701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:02.530630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:32.458468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:32.537332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:51:33.268662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	
	
	==> kube-proxy [ea145bd33786beab5695edea53c4427b5de9ac7e59c201cefdd36226f43e54ca] <==
	I0401 20:38:59.352570       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:38:59.739049       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:38:59.739232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:38:59.932876       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:38:59.932949       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:38:59.936073       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:38:59.936478       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:38:59.936515       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:59.939364       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:00.018698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:38:59.961970       1 config.go:199] "Starting service config controller"
	I0401 20:39:00.018788       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:38:59.963606       1 config.go:329] "Starting node config controller"
	I0401 20:39:00.018803       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:00.121850       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:39:00.121958       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:00.122020       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b1d13381b02cc94d594efb9905918a3d246d7722a4c6dbc1796409ac561c2e3d] <==
	I0401 20:38:56.385160       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:38:58.139246       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:38:58.139285       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:38:58.139315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:38:58.139326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:38:58.244037       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:38:58.244065       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:58.245973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:38:58.246009       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:38:58.246168       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:38:58.246306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:38:58.348872       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:51:07 no-preload-671514 kubelet[663]: E0401 20:51:07.556193     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:51:09 no-preload-671514 kubelet[663]: E0401 20:51:09.715131     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:14 no-preload-671514 kubelet[663]: E0401 20:51:14.635640     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540674635437710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:14 no-preload-671514 kubelet[663]: E0401 20:51:14.635682     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540674635437710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:14 no-preload-671514 kubelet[663]: E0401 20:51:14.716109     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:19 no-preload-671514 kubelet[663]: E0401 20:51:19.555897     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:51:19 no-preload-671514 kubelet[663]: E0401 20:51:19.717233     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:24 no-preload-671514 kubelet[663]: E0401 20:51:24.636554     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540684636389737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:24 no-preload-671514 kubelet[663]: E0401 20:51:24.636598     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540684636389737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:24 no-preload-671514 kubelet[663]: E0401 20:51:24.718680     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:29 no-preload-671514 kubelet[663]: E0401 20:51:29.719306     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:31 no-preload-671514 kubelet[663]: E0401 20:51:31.556077     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:51:34 no-preload-671514 kubelet[663]: E0401 20:51:34.637502     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540694637313452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:34 no-preload-671514 kubelet[663]: E0401 20:51:34.637543     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540694637313452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:34 no-preload-671514 kubelet[663]: E0401 20:51:34.719982     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:39 no-preload-671514 kubelet[663]: E0401 20:51:39.721306     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:42 no-preload-671514 kubelet[663]: E0401 20:51:42.556004     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:51:44 no-preload-671514 kubelet[663]: E0401 20:51:44.638506     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540704638301061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:44 no-preload-671514 kubelet[663]: E0401 20:51:44.638541     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540704638301061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:44 no-preload-671514 kubelet[663]: E0401 20:51:44.722359     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:49 no-preload-671514 kubelet[663]: E0401 20:51:49.724073     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:54 no-preload-671514 kubelet[663]: E0401 20:51:54.639535     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540714639331931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:54 no-preload-671514 kubelet[663]: E0401 20:51:54.639578     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540714639331931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:54 no-preload-671514 kubelet[663]: E0401 20:51:54.724850     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:57 no-preload-671514 kubelet[663]: E0401 20:51:57.555983     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: loading manifest for target platform: reading manifest sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97 in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1 (69.814125ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hxxvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  8m1s (x2 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  15m (x2 over 21m)   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-28pk4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-nmk5v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-d2blk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.39s)
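Root cause visible in the kubelet log above: every pull of docker.io/kindest/kindnetd:v20250214-acbabc1a fails with Docker Hub's unauthenticated rate limit (toomanyrequests), so the kindnet CNI never starts, /etc/cni/net.d stays empty, the node keeps the node.kubernetes.io/not-ready taint, and the busybox, metrics-server, and dashboard pods can never be scheduled (which in turn explains the v1beta1.metrics.k8s.io 503 loop in the apiserver and controller-manager logs). A minimal sketch for confirming and working around this on a comparable run; these commands are illustrative and were not executed by the harness:

	# confirm the node is still tainted because no CNI config was ever written
	kubectl --context no-preload-671514 describe node no-preload-671514 | grep -i taints

	# side-load the image from an authenticated host to bypass the pull limit
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a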

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rwzdk" [b25763a9-af09-4aa5-b4e1-eefefa2ff944] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
E0401 20:43:05.583257   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:52:04.695933805 +0000 UTC m=+4010.296865239
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe po kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-993330 describe po kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-rwzdk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             <none>
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
  kubernetes-dashboard:
    Image:      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5nqj8 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-5nqj8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  2m28s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 logs kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-993330 logs kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard:
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
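The describe output above shows a single FailedScheduling event against the node.kubernetes.io/not-ready taint: the node never reported Ready after the restart, while the dashboard pod spec itself is fine. A quick way to check the node's taints and Ready condition from the same kubectl context (illustrative only, assuming the profile is still running):

	kubectl --context default-k8s-diff-port-993330 get node default-k8s-diff-port-993330 \
	  -o jsonpath='{.spec.taints}{"\n"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}'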
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353427,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:54.287928611Z",
	            "FinishedAt": "2025-04-01T20:38:53.06055829Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec09fa1a9496e05123b7a54f35ba87b679a89f15a6b0677344788b51903d4cb2",
	            "SandboxKey": "/var/run/docker/netns/ec09fa1a9496",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:be:99:3d:93:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "5aaf086e3c391b2394b006ad5aca69dfaf955cf2259cb4d42342fb401f46a6a2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
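The inspect output shows the Docker layer is healthy: the container has been Running since 2025-04-01T20:38:54Z with the apiserver port 8444 published on 127.0.0.1:33126, so the hang is inside the guest rather than in Docker. The relevant fields can be pulled without dumping the full JSON (illustrative one-liner):

	docker inspect -f '{{.State.Status}} since {{.State.StartedAt}}' default-k8s-diff-port-993330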
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.008924198s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
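	The two df probes above are a quick disk-pressure check on /var: with -h, the fifth column of the second output row is the used percentage, and with -BG the fourth column is the available space in gibibytes. Standalone:
	
	  df -h /var | awk 'NR==2{print $5}'    # used, e.g. "23%"
	  df -BG /var | awk 'NR==2{print $4}'   # available, e.g. "74G"
	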
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
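	The find/-exec pairs above disable conflicting CNI configurations non-destructively: loopback, bridge, and podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix rather than deleted, so they can be restored later. A quoting-safe equivalent of the bridge/podman pass (find already runs as root here, so no inner sudo is needed):
	
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -printf "%p, " -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	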
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
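	Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O matches the kubelet configured later in this log: the pause image is pinned, the cgroup manager is set to cgroupfs with conmon placed in the pod cgroup, and binds to low ports are allowed for unprivileged containers; the daemon-reload plus restart then applies it. The resulting fragment should look roughly like this (a sketch of the touched keys, not the whole file):
	
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	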
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
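	The bash one-liner above is an idempotent hosts-file update: it filters out any stale host.minikube.internal entry, appends the current gateway mapping, and installs the result with sudo cp (a plain `sudo echo >> /etc/hosts` would not work, because the shell performs the redirection before sudo elevates). Spelled out:
	
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.76.1\thost.minikube.internal'; } > /tmp/h.$$ \
	    && sudo cp /tmp/h.$$ /etc/hosts
	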
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
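	The empty ExecStart= followed by a populated one in the unit text above is the standard systemd drop-in idiom: ExecStart entries accumulate, so the blank assignment first clears whatever the packaged kubelet.service defined, and the second line installs minikube's command line (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below). A minimal drop-in of the same shape (path and flags illustrative, not minikube's exact file):
	
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml
	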
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
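	The manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file, which is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check such a file by hand, recent kubeadm releases (roughly v1.26 and later; verify against your version) ship a validator:
	
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	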
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
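	The openssl x509 -hash runs above compute the subject-name hash that OpenSSL uses to locate CA files, which is why the symlinks created alongside them carry names like 51391683.0 and b5213941.0: <hash>.0 under /etc/ssl/certs is the canonical trust-store layout. One link, reproduced by hand:
	
	  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	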
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
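	Each -checkend 86400 probe above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); this is how minikube decides whether the existing control-plane certificates can be reused on restart. Standalone form:
	
	  # exit 0: still valid in 24h; exit 1: expiring or already expired
	  openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	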
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
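Each addon above follows the same two-phase pattern: every manifest is first scp'd into /etc/kubernetes/addons on the node, then one kubectl invocation against the in-cluster kubeconfig applies the whole group. A hedged way to verify the dashboard pieces by hand afterwards (profile name taken from this log; kubernetes-dashboard is the namespace the dashboard-ns.yaml manifest conventionally creates):

        kubectl --context no-preload-671514 -n kubernetes-dashboard get deploy,svc,pods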
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
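The SSH command block above is idempotent: an existing 127.0.1.1 mapping is rewritten in place, otherwise a new line is appended, and a host that already resolves the name is left untouched. A minimal append-only equivalent (the replace branch is omitted; NAME is an illustrative variable, not from the log):

        NAME=embed-certs-974821
        grep -q "[[:space:]]${NAME}$" /etc/hosts || \
          echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts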
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
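copyRemoteCerts pushes three files to /etc/docker on the node: the CA certificate plus the freshly generated server certificate and key, whose SANs (listed at the provision.go:117 line above) cover 127.0.0.1, the container IP, and the machine names. A hedged spot check on the node:

        sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'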
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
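The /etc/sysconfig/crio.minikube drop-in written just above marks the entire service CIDR (10.96.0.0/12) as an insecure registry for CRI-O and then restarts the daemon; the tee'd content is echoed back in the SSH output. A hedged post-check:

        cat /etc/sysconfig/crio.minikube
        systemctl is-active crio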
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
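Before picking a CNI, minikube sidelines any preexisting loopback/bridge/podman configs under /etc/cni/net.d by renaming them with a .mk_disabled suffix; here only the loopback config existed, so there was nothing further to disable. A hedged manual equivalent of the same sweep:

        sudo find /etc/cni/net.d -maxdepth 1 -type f \
          \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
          -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;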
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
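Assuming the sed edits above all land, the touched fragment of /etc/crio/crio.conf.d/02-crio.conf ends up reading roughly:

        pause_image = "registry.k8s.io/pause:3.10"
        cgroup_manager = "cgroupfs"
        conmon_cgroup = "pod"
        default_sysctls = [
          "net.ipv4.ip_unprivileged_port_start=0",
        ]

(The parallel old-k8s-version run below performs the same sequence but pins pause:3.2 to match Kubernetes v1.20.0.)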
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
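Both 60-second gates above (first on the socket path, then on crictl version) amount to a poll loop; a hedged shell rendition of the same readiness check:

        for i in $(seq 1 60); do
          [ -S /var/run/crio/crio.sock ] && sudo crictl version >/dev/null 2>&1 && break
          sleep 1
        done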
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
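The [Service] fragment above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears below); the empty ExecStart= resets the command list so the drop-in's ExecStart takes effect. After daemon-reload the merged unit can be inspected with (sketch):

	systemctl cat kubelet
	# lists /lib/systemd/system/kubelet.service followed by the
	# 10-kubeadm.conf drop-in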
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
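	Given a kubeadm config like the one above, the control-plane images it implies can be listed offline (sketch; the .new path is where this log scp's the rendered config):
	
		kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new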
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
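The one-liner above updates /etc/hosts idempotently: drop any stale mapping, append the fresh one, copy the result back. The same steps unrolled (sketch):

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	printf '192.168.94.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts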
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
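The openssl x509 -hash / ln -fs pairs above replicate what c_rehash does: each CA certificate is linked under its subject-hash name so OpenSSL can resolve it at verification time. Sketch for one of the certs handled above:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem)  # 51391683, per the link created above
	sudo ln -fs /etc/ssl/certs/23163.pem "/etc/ssl/certs/${h}.0"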
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
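openssl x509 -checkend 86400 exits non-zero when a certificate expires within 24 hours, so each probe above doubles as a pass/fail test; the same checks collapsed into a loop (sketch):

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    || echo "${c} expires within 24h"
	done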
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
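The inspect template above extracts the host port mapped to the container's sshd; docker port reads the same mapping directly (sketch; 33113 is the port the ssh clients below connect to):

	docker port embed-certs-974821 22/tcp
	# e.g. 0.0.0.0:33113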
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
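(The repeating "apply failed, will retry" / "will retry after ..." pairs above come from the retry helper at retry.go:31: each kubectl apply fails with "connection refused" while the old-k8s-version apiserver on localhost:8443 is still coming up, and is retried with a randomized, growing delay. A minimal sketch of that bounded-retry pattern; retryUntil and its parameters are illustrative assumptions, not minikube's actual implementation:)

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil keeps calling fn with a jittered, doubling delay until it
	// succeeds or the deadline passes -- roughly the behavior the
	// retry.go:31 log lines show while the apiserver is unreachable.
	func retryUntil(timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryUntil(5*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection to the server localhost:8443 was refused")
			}
			return nil
		})
		fmt.Println("result:", err) // succeeds once the "apiserver" answers
	}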
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
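(With addons enabled, the remainder of this log is the node_ready.go:53 poll loop: each test waits up to 6m0s, re-checking roughly every 2s whether its node's Ready condition has turned "True", and logging the transient "connection refused" errors seen earlier while the apiserver restarts. A self-contained Go sketch of that wait loop; nodeReady is a stub standing in for the real apiserver query via client-go:)

	package main

	import (
		"fmt"
		"time"
	)

	// nodeReady is a stub: a real implementation would GET
	// /api/v1/nodes/<name> and check the NodeReady condition for "True".
	func nodeReady(name string) (bool, error) {
		return time.Now().Unix()%5 == 0, nil
	}

	// waitForNodeReady polls until the node reports Ready or the timeout
	// elapses, mirroring the cadence of the node_ready.go:53 lines below.
	func waitForNodeReady(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ready, err := nodeReady(name)
			if err != nil {
				fmt.Printf("error getting node %q: %v\n", name, err)
			} else if ready {
				return nil
			} else {
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q never became Ready within %v", name, timeout)
	}

	func main() {
		if err := waitForNodeReady("old-k8s-version-964633", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}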
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
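
All four SecondStart failures above end identically: each minikube process (347136, 351594, 351961, 352934) polls its node's Ready condition every few seconds, logs "has status \"Ready\":\"False\"" on each miss, and after roughly 4m of polling against the 6m node-wait budget exits with GUEST_START / waitNodeCondition: context deadline exceeded. For reference, a minimal sketch of that kind of readiness poll using client-go; this is illustrative, not minikube's node_ready.go, and the helper name waitNodeReady, the 2s interval, and the hard-coded node name are assumptions:

    package main

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the Node object until its NodeReady condition is
    // True or the timeout expires, mirroring the loop in the traces above.
    // Hypothetical helper, not minikube's actual implementation.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        log.Printf("node %q has status \"Ready\":%q", name, cond.Status)
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        c, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The failing runs above exhausted their budget waiting on this condition.
        if err := waitNodeReady(context.Background(), c, "no-preload-671514", 4*time.Minute); err != nil {
            log.Fatalf("waitNodeCondition: %v", err)
        }
    }

When the node never turns Ready, wait.PollUntilContextTimeout returns an error derived from the expired context, which is the "context deadline exceeded" wrapped into the exit messages above.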
	
	
	==> CRI-O <==
	Apr 01 20:49:17 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:17.644220618Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cb8575b7-6929-4dd3-837f-d0fe18d4e81a name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:28 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:28.644288891Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3257b757-2125-4d2b-a259-c386fe989790 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:28 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:28.644558151Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3257b757-2125-4d2b-a259-c386fe989790 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:40 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:40.644629245Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d74560dc-56db-4920-8d59-2faceb3a45c5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:40 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:40.644929738Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d74560dc-56db-4920-8d59-2faceb3a45c5 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:54 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:54.644061734Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a89b44eb-edcd-47df-9059-8e1f9e647292 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:54 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:49:54.644296606Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a89b44eb-edcd-47df-9059-8e1f9e647292 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:08 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:08.644002891Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0a310381-f85a-4327-80cc-316bdc671422 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:08 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:08.644250920Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0a310381-f85a-4327-80cc-316bdc671422 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:22 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:22.644929322Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7ee67c95-7cff-43de-aef1-c7d15e611d70 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:22 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:22.645150591Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7ee67c95-7cff-43de-aef1-c7d15e611d70 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:36 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:36.644072747Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d7f7419d-ab12-4981-ba17-0d6ca5617a11 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:36 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:36.644349836Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d7f7419d-ab12-4981-ba17-0d6ca5617a11 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:51 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:51.644377537Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2bf09121-8610-4aa5-810e-e0f1bd5690d6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:51 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:50:51.644644177Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2bf09121-8610-4aa5-810e-e0f1bd5690d6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:02 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:02.644474982Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=b2b97d44-b8ce-4697-8278-d63a816a36b8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:02 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:02.644693337Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=b2b97d44-b8ce-4697-8278-d63a816a36b8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:16 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:16.644885847Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cc689d48-08c2-4f8b-ab7e-784550380cb2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:16 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:16.645178853Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cc689d48-08c2-4f8b-ab7e-784550380cb2 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:28 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:28.644491703Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0683d8a9-80da-4223-8a5f-5e24d707a600 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:28 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:28.644730115Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0683d8a9-80da-4223-8a5f-5e24d707a600 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:43.644346679Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=9e89b187-838c-49e8-8a52-287bff3ea099 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:43.644610946Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=9e89b187-838c-49e8-8a52-287bff3ea099 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:54 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:54.644364684Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a42bc86e-ead7-4711-8238-b6507b8ba867 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:54 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:51:54.644594761Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a42bc86e-ead7-4711-8238-b6507b8ba867 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f01b95ee70b78       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   12 minutes ago      Running             kube-proxy                1                   c991b896744f3       kube-proxy-btnmc
	65a195d0c0eee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   13 minutes ago      Running             kube-scheduler            1                   c122dcfc3b396       kube-scheduler-default-k8s-diff-port-993330
	3fc5e3c8360ed       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   13 minutes ago      Running             kube-apiserver            1                   ed07a91d341b7       kube-apiserver-default-k8s-diff-port-993330
	359dfdc6cc6fc       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   13 minutes ago      Running             kube-controller-manager   1                   5aa5cbe680b17       kube-controller-manager-default-k8s-diff-port-993330
	97f8ee6669267       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   13 minutes ago      Running             etcd                      1                   81f7f6b1c2968       etcd-default-k8s-diff-port-993330
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:52:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:49:38 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:49:38 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:49:38 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:49:38 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 a059a387258444c8a5d2ccbb6a4f4f0c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25m (x8 over 25m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    25m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 25m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  25m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     25m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 25m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           25m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
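
The Ready condition above is False with "No CNI configuration file in /etc/cni/net.d/", which closes the loop with the CRI-O log: kindnet-9xbmt is scheduled (it tolerates the not-ready taint) but, per the container status section, never starts because its image is missing, so nothing ever writes a CNI config and the node stays NotReady. For orientation, the kind of file CNI expects in /etc/cni/net.d/ is a conflist roughly like the following; this is an illustrative ptp/host-local example for the node's PodCIDR 10.244.0.0/24, not the exact file kindnet generates:

    {
      "cniVersion": "0.3.1",
      "name": "kindnet",
      "plugins": [
        {
          "type": "ptp",
          "ipMasq": false,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.244.0.0/24" }]],
            "routes": [{ "dst": "0.0.0.0/0" }]
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }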
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [97f8ee6669267ad80232ce8bf71fc941954cb5cbcd412ad8213873a5a511b38b] <==
	{"level":"info","ts":"2025-04-01T20:39:02.920834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-04-01T20:39:02.920931Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:39:02.920955Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-04-01T20:39:02.920963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:02.920994Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:04.749608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.750727Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:39:04.750738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.750768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.751743Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.752148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:04.752611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.753116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:38.126747Z","caller":"traceutil/trace.go:171","msg":"trace[1345586840] transaction","detail":"{read_only:false; response_revision:853; number_of_response:1; }","duration":"118.996467ms","start":"2025-04-01T20:39:38.007727Z","end":"2025-04-01T20:39:38.126724Z","steps":["trace[1345586840] 'process raft request'  (duration: 56.085101ms)","trace[1345586840] 'compare'  (duration: 62.811604ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:49:04.766909Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":971}
	{"level":"info","ts":"2025-04-01T20:49:04.771402Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":971,"took":"4.247389ms","hash":3246169667,"current-db-size-bytes":1921024,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1921024,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-04-01T20:49:04.771435Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3246169667,"revision":971,"compact-revision":537}
	
	
	==> kernel <==
	 20:52:05 up  1:34,  0 users,  load average: 0.19, 0.38, 0.98
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3fc5e3c8360edb7984be32faf8eef372adf72360ea8d96ce692122c037453681] <==
	I0401 20:47:07.150405       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:47:07.152407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:49:06.148360       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:49:06.148471       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0401 20:49:07.151315       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:49:07.151360       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0401 20:49:07.151424       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:49:07.151513       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:49:07.152490       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:49:07.152540       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:50:07.153004       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:50:07.153007       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:50:07.153077       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0401 20:50:07.153078       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:50:07.154194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:50:07.154207       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [359dfdc6cc6fc25f3136a3577c905adb20d4762ca289cc023c7aa3e8c0221998] <==
	E0401 20:46:09.390595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:46:09.421608       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:46:39.396086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:46:39.427901       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:09.401153       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:09.435429       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:39.405983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:39.442130       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:09.411470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:09.449374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:39.417962       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:39.456139       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:49:09.423705       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:09.463494       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:49:38.101071       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	E0401 20:49:39.428778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:39.470269       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:09.433661       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:09.477225       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:39.438539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:39.484803       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:09.444293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:09.491313       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:39.449403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:39.498659       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [f01b95ee70b78d448bb8f831dc34b6c7ae96d0ccbdce6b18c2c076cbba24760e] <==
	I0401 20:39:07.540137       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:07.958690       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:39:07.959920       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:08.054675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:08.055270       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:08.058888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:08.059395       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:08.059435       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:08.060790       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:08.060804       1 config.go:199] "Starting service config controller"
	I0401 20:39:08.060830       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:08.060832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:08.061405       1 config.go:329] "Starting node config controller"
	I0401 20:39:08.061423       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:08.160990       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:08.160982       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:08.161646       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [65a195d0c0eee552be400b60ac82ad3be750b1213af7968bc93e67d39c09622b] <==
	I0401 20:39:03.764615       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:06.018090       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:06.042164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:06.042298       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:06.042343       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:06.146155       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:06.146255       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.153712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:06.156339       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:06.161882       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:06.158746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:06.263913       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:51:11 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:11.808854     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:16 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:16.645567     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:51:16 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:16.809872     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:21.723686     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540681723501489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:21.723729     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540681723501489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:21 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:21.811014     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:26 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:26.812412     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:28 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:28.645070     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:51:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:31.724620     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540691724438534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:31.724666     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540691724438534,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:31.813811     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:36 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:36.814671     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:41.725641     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540701725457286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:41.725681     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540701725457286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:41.816297     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:43 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:43.644846     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:51:46 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:46.816922     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:51.726723     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540711726499863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:51.726768     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540711726499863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:51.818229     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:54 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:54.644882     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:51:56 default-k8s-diff-port-993330 kubelet[668]: E0401 20:51:56.819468     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:52:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:52:01.727684     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540721727479122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:52:01.727724     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540721727479122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:52:01.820786     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
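The logs above describe one failure chain rather than several independent faults: the kindnet CNI image cannot be pulled (Docker Hub's unauthenticated rate limit, "toomanyrequests"), so no CNI config ever lands in /etc/cni/net.d/, the node never leaves NotReady, and every pending pod is blocked behind the node.kubernetes.io/not-ready taint. The metrics.k8s.io 503s in the kube-apiserver and kube-controller-manager sections are downstream of the same stall, since metrics-server itself cannot be scheduled. A minimal triage sketch, assuming the profile's kubectl context and the image tag quoted in the kubelet log; loading the image from the host only helps if the host still has pull quota or is authenticated via docker login:

    # Confirm the taint that is blocking scheduling
    kubectl --context default-k8s-diff-port-993330 describe node default-k8s-diff-port-993330 | grep -i taints
    # Confirm the CNI pod is stuck on the image pull
    kubectl --context default-k8s-diff-port-993330 -n kube-system describe pod kindnet-9xbmt
    # Side-step the in-cluster rate limit by pulling on the host and loading into the profile
    docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
    minikube -p default-k8s-diff-port-993330 image load docker.io/kindest/kindnetd:v20250214-acbabc1a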
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1 (70.873266ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7wrpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15m (x2 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  3m (x3 over 13m)   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-998nd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-dskhc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-rwzdk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.34s)
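The NotFound errors in the stderr block above are an artifact of the post-mortem helper rather than additional failures: helpers_test.go collects non-running pod names across all namespaces (-A) but then describes them without a namespace, so every pod outside default resolves as NotFound. The per-namespace equivalents would look like this (namespaces inferred from the standard pod names, so treat them as assumptions):

    kubectl --context default-k8s-diff-port-993330 -n kube-system describe pod kindnet-9xbmt
    kubectl --context default-k8s-diff-port-993330 -n kubernetes-dashboard describe pod kubernetes-dashboard-7779f9b69b-rwzdk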

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-q2fjx" [6ed5edcd-f3a9-4177-bc48-6176cfd8c20d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:52:09.085268322 +0000 UTC m=+4014.686199767
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-974821 describe po kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-974821 describe po kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard:
Name:             kubernetes-dashboard-7779f9b69b-q2fjx
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             <none>
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=7779f9b69b
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/kubernetes-dashboard-7779f9b69b
Containers:
  kubernetes-dashboard:
    Image:      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2dtv9 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-2dtv9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  12m                    default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  2m34s (x2 over 7m34s)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-974821 logs kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-974821 logs kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard:
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
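The describe output shows why the pod's toleration list does not help: it tolerates node.kubernetes.io/not-ready only with the NoExecute effect (and only for 300s), while an unready node also carries the NoSchedule-effect variant of that taint, which stays untolerated, so the scheduler keeps rejecting the pod. Assuming the single-node profile names its node after the profile, the blocking taints can be read directly:

    kubectl --context embed-certs-974821 get node embed-certs-974821 -o jsonpath='{.spec.taints}'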
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.286446875Z",
	            "FinishedAt": "2025-04-01T20:38:52.118073098Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a400a933eabcb680d1a6c739c40c6e1e691bc1d846119585a6bea14a4faf054",
	            "SandboxKey": "/var/run/docker/netns/3a400a933eab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:df:19:aa:43:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "fcd49a1d7a931c51670bb1639475ceebb2f5e6078df77f57455465bfc6426ab5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
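In the inspect output above, every container port is published on 127.0.0.1 with an ephemeral host port (8443 maps to 33116 here), which is how the kicbase-based docker driver exposes the apiserver to the host. Two stock docker commands recover the same mapping without reading the full JSON:

    docker port embed-certs-974821 8443
    docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-974821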
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25: (1.121972515s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
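Note: the table above is minikube's recorded command history for this run. The failing no-preload start should be reproducible locally with the same flags shown in its row (a sketch, assuming the same minikube v1.35.0 binary and a Docker host):

	out/minikube-linux-amd64 start -p no-preload-671514 \
	  --memory=2200 --alsologtostderr --wait=true --preload=false \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.32.2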
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
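Note: minikube sidelines any pre-existing loopback/bridge CNI configs here because it installs its own CNI (kindnet, per the "recommending kindnet" lines further down); renaming to *.mk_disabled rather than deleting keeps the originals restorable. A quoted equivalent of the find/mv pattern above (a sketch; the logged command relies on the remote shell leaving the globs unexpanded):

	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;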
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
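Note: after the sed edits above, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows (a sketch reconstructed from the sed expressions, not a file captured from the node):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]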
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
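Note: this resolved cluster config is persisted per profile; the "Saving config" line below writes it to .minikube/profiles/default-k8s-diff-port-993330/config.json, which can be pretty-printed off the node (a sketch, assuming python3 is available on the CI host):

	python3 -m json.tool /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json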
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
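Note: the ExecStart override above appears to be installed as a systemd drop-in (the scp a few lines below targets /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 367 bytes, alongside the base /lib/systemd/system/kubelet.service). The merged unit can be inspected on the node with:

	sudo systemctl cat kubelet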
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
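Note: the generated manifest above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2291-byte scp below) and compared against the live copy before any reconfiguration is attempted; the same check can be run by hand, mirroring the diff this log issues at 20:38:54.972990:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new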
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
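Note: the "openssl x509 -hash -noout" runs above compute the subject-hash filename that OpenSSL uses to look up CAs in /etc/ssl/certs (here 51391683, 3ec20f2e and b5213941); each "ln -fs" then publishes the corresponding PEM under <hash>.0. The same pairing by hand (a sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/"$h".0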
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
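Note: "-checkend 86400" asks whether a certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if it will and 1 if it expires within that window, which is what lets this restart path decide whether certs need regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"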
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
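	[note] The repeated "docker container inspect -f" calls above all extract the host port Docker mapped to the container's SSH port; sshutil then dials that port on 127.0.0.1. The Go template is taken verbatim from the log:

	    # Which host port forwards to 22/tcp inside the container?
	    docker container inspect -f \
	      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	      no-preload-671514
	    # prints the mapped port, 33108 in this run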
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
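	[note] node_ready.go polls the node object until its Ready condition turns True, for up to 6m. A rough hand-run equivalent, assuming the profile's kubectl context exists (a sketch, not minikube's internal poller):

	    kubectl --context no-preload-671514 wait --for=condition=Ready \
	      node/no-preload-671514 --timeout=6m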
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
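	[note] The SSH script above edits /etc/hosts idempotently: if a line already names the host it does nothing; otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. A hypothetical spot check of the result from the host, using the standard minikube CLI:

	    minikube -p embed-certs-974821 ssh "grep '^127.0.1.1' /etc/hosts"
	    # expected: 127.0.1.1 embed-certs-974821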
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
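	[note] provision.go mints a server certificate whose SANs cover the loopback and container IPs plus the machine names listed in san=[...] above. Assuming OpenSSL 1.1.1+ is available on the host, the SANs of the generated cert could be inspected like this (illustrative only, not part of the test):

	    openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem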
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
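	[note] The container-runtime step just completed writes /etc/sysconfig/crio.minikube with an --insecure-registry flag for the service CIDR (10.96.0.0/12) and restarts CRI-O. A hedged confirmation from outside the node:

	    minikube -p embed-certs-974821 ssh "cat /etc/sysconfig/crio.minikube"
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '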
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
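	[note] The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf before the restart: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. Grepping those keys inside the node should show values matching the commands (a sketch; the expected lines are read off the sed expressions, not from an actual dump):

	    minikube -p embed-certs-974821 ssh \
	      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	         /etc/crio/crio.conf.d/02-crio.conf"
	    # pause_image = "registry.k8s.io/pause:3.10"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",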
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
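The two find runs above disable conflicting host CNI configs by renaming rather than deleting them: any *loopback.conf* file (and any bridge/podman config, had one existed) gains a .mk_disabled suffix so that kindnet's config takes precedence, and the change stays reversible. An equivalent standalone command, hedged to the loopback case shown:

	# reversible disable of host loopback CNI configs, mirroring the log above
	sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
	  -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;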
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
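The three commands above are a delete/ensure/insert idiom: drop any stale net.ipv4.ip_unprivileged_port_start entry, create an empty default_sysctls = [ ] block if none exists, then splice the sysctl in as the first array element. Setting ip_unprivileged_port_start=0 lets containers bind ports below 1024 without CAP_NET_BIND_SERVICE. A sketch of the fragment those edits should leave behind, assuming no other default_sysctls entries:

	# /etc/crio/crio.conf.d/02-crio.conf (fragment, after the default_sysctls edits)
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]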
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
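The one-liner above is minikube's idempotent /etc/hosts updater: filter out any existing host.minikube.internal line, append the current mapping, and copy the temp file back under sudo (a plain `>` redirect would fail because the shell opens /etc/hosts before sudo runs). The same step in isolation:

	# idempotent /etc/hosts update, as in the log above
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts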
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
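The kubelet unit above uses the standard systemd drop-in override pattern: the blank ExecStart= clears the command inherited from the base kubelet.service, and the second ExecStart= installs the versioned binary with minikube's flags; without the blank assignment systemd would reject a second ExecStart for a non-oneshot service. The pattern in isolation (flags abbreviated from the logged command line):

	[Service]
	# blank assignment clears ExecStart inherited from kubelet.service
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2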
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
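The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On recent kubeadm releases the rendered file can be sanity-checked offline; this is a hedged suggestion, not something the test run does, and the subcommand requires kubeadm v1.26 or newer:

	# optional offline check of the rendered config (kubeadm v1.26+)
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new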
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
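Each `openssl x509 -hash` call above prints the subject-name hash that OpenSSL's certificate lookup expects as a filename, and the paired `ln -fs` creates the <hash>.0 link under /etc/ssl/certs (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs), so TLS clients on the node trust these CAs. The step for one certificate, in isolation:

	# subject-hash symlink for one CA, mirroring the log above
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"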
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
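`openssl x509 -checkend 86400` exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so the six runs above act as a cheap expiry gate before the existing control-plane certificates are reused. One check in isolation:

	# expiry gate, as in the log: succeeds only if the cert outlives the next 24 h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "cert valid for at least another day"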
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
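The "does not require reconfiguration" verdict above comes from the `diff -u` run two lines earlier: diff exits 0 when the freshly rendered kubeadm.yaml.new matches the kubeadm.yaml already on the node, letting minikube skip `kubeadm init` on restart. The check in shell form:

	# restart fast path: reconfigure only if the rendered config changed
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null \
	  && echo "running cluster does not require reconfiguration"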
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
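The restart path recorded above turns on a bare existence probe: if the kubeadm artifacts are already on the node, minikube attempts a cluster restart instead of a fresh init. A minimal sketch of that gate, assuming the three probed paths from the `sudo ls` line above (the helper is hypothetical, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Paths probed before choosing restart vs. init; `sudo ls` exits
		// non-zero if any of them is missing.
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		args := append([]string{"ls"}, paths...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			fmt.Println("no existing configuration files, full kubeadm init needed")
			return
		}
		fmt.Println("found existing configuration files, will attempt cluster restart")
	}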
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
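The one-liner above is the hosts-pinning idiom used throughout these logs: filter any stale host.minikube.internal entry out of /etc/hosts, append a fresh one into a temp file, and sudo-copy the result back (a plain `>` redirect into /etc/hosts would run without root). A sketch of the same idiom, with a helper name of my own:

	package hostsfix

	import (
		"fmt"
		"os/exec"
	)

	// pinHost rewrites /etc/hosts so host resolves to ip, mirroring the
	// grep-filter / append / sudo-cp sequence in the log line above.
	func pinHost(ip, host string) error {
		script := fmt.Sprintf(
			"{ grep -v $'\\t%s$' /etc/hosts; printf '%s\\t%s\\n'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts",
			host, ip, host)
		return exec.Command("/bin/bash", "-c", script).Run()
	}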
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
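The lines above are the preload fast path: /preloaded.tar.lz4 is absent on the fresh node, so the ~473 MB cached tarball is copied over SSH and unpacked into /var, leaving CRI-O's image store pre-populated for v1.20.0. A sketch of the extract step, with the paths as they appear in the log:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// lz4-decompress through tar's -I, keep security xattrs, and root
		// the archive at /var, where CRI-O keeps its image store.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if err := cmd.Run(); err != nil {
			log.Fatalf("extracting preload: %v", err)
		}
	}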
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
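The "waiting up to 6m0s" line marks the start of node_ready.go's poll loop against the apiserver. A minimal client-go sketch of such a poll (client-go is an assumed dependency here; minikube's own poller differs in detail):

	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls until the named node reports Ready or the
	// timeout elapses.
	func WaitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
	}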
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
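Both failures above are expected during a restart: nothing is listening on localhost:8444 yet, so manifest validation cannot download the OpenAPI schema, and retry.go schedules short jittered backoffs (167 ms, then 331 ms) before the applies are reissued. A sketch of that retry shape, with names of my own rather than minikube's:

	package retrysketch

	import (
		"fmt"
		"log"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithBackoff retries a kubectl apply with roughly doubling,
	// jittered delays, as the retry.go lines above record.
	func applyWithBackoff(manifest string, attempts int) error {
		delay := 150 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if err := exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			log.Printf("will retry after %s", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return fmt.Errorf("%s: still failing after %d attempts", manifest, attempts)
	}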
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
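With no preload available for v1.20.0, the sequence above is the cache fallback: each required image is inspected in the container runtime, flagged "needs transfer" when its stored ID does not match the expected hash, removed via crictl, and then loaded from the on-disk cache; here the kube-apiserver image is missing from the cache as well, so LoadCachedImages gives up and the images will be pulled normally. A sketch of the per-image check (hypothetical helper; the real logic lives in minikube's cache_images.go):

	package imagecache

	import (
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether image must be copied into the node's
	// container runtime: true when podman cannot find it, or when the
	// stored ID differs from the hash recorded for the cached image.
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // not present in the runtime at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}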
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
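Note the rendered config lands in kubeadm.yaml.new rather than over the live file; whether the control plane actually needs reconfiguring is decided by the `sudo diff -u` probe that follows (20:39:07.158 below, and 20:39:02.076 earlier for the other profile). A sketch of that drift gate:

	package reconfig

	import "os/exec"

	// needsReconfig reports config drift: diff exits 0 when the live
	// kubeadm.yaml matches the freshly rendered kubeadm.yaml.new, and
	// non-zero when the control plane would have to be reconfigured.
	func needsReconfig() bool {
		err := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new").Run()
		return err != nil
	}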
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
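The openssl/ln pairs above install each certificate into the system trust store: `openssl x509 -hash` prints the subject hash, and OpenSSL resolves trust lookups through symlinks named <hash>.0, which is exactly what b5213941.0, 51391683.0 and 3ec20f2e.0 are. The same install step as a sketch:

	package trust

	import (
		"os/exec"
		"strings"
	)

	// trustCert symlinks a certificate under the subject-hash name that
	// OpenSSL uses to find it during chain verification.
	func trustCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		return exec.Command("sudo", "ln", "-fs", pem, link).Run()
	}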
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
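The -checkend probes above are expiry guards: openssl exits non-zero when a certificate will lapse within the given window (86400 s, i.e. 24 h). The equivalent check:

	package certs

	import "os/exec"

	// expiresWithinADay is true when the certificate will expire inside
	// 24 hours; `openssl x509 -checkend 86400` exits 1 in that case.
	func expiresWithinADay(cert string) bool {
		return exec.Command("openssl", "x509", "-noout",
			"-in", cert, "-checkend", "86400").Run() != nil
	}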
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
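
	The kubeconfig lines above show the repair path: the profile's cluster and context entries are missing, so minikube rewrites the kubeconfig under a file lock. A sketch of that repair using client-go's clientcmd (the lock and auth-info wiring are omitted; path, profile name, and server URL are taken from this run's log):

	    package main

	    import (
	        "log"

	        "k8s.io/client-go/tools/clientcmd"
	        api "k8s.io/client-go/tools/clientcmd/api"
	    )

	    // repairKubeconfig adds missing cluster/context entries for a profile,
	    // mirroring the "needs updating (will repair)" step logged above.
	    func repairKubeconfig(path, name, server string) error {
	        cfg, err := clientcmd.LoadFromFile(path)
	        if err != nil {
	            return err
	        }
	        if _, ok := cfg.Clusters[name]; !ok { // missing cluster setting
	            c := api.NewCluster()
	            c.Server = server
	            cfg.Clusters[name] = c
	        }
	        if _, ok := cfg.Contexts[name]; !ok { // missing context setting
	            ctx := api.NewContext()
	            ctx.Cluster = name
	            ctx.AuthInfo = name
	            cfg.Contexts[name] = ctx
	        }
	        return clientcmd.WriteToFile(*cfg, path)
	    }

	    func main() {
	        err := repairKubeconfig(
	            "/home/jenkins/minikube-integration/20506-16361/kubeconfig",
	            "old-k8s-version-964633",
	            "https://192.168.85.2:8443",
	        )
	        if err != nil {
	            log.Fatal(err)
	        }
	    }
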
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
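
	"scp memory --> file" means the addon manifest is streamed from an in-memory asset rather than copied from disk. One simple way to reproduce that step is piping the bytes over ssh into sudo tee; the manifest contents and key path below are placeholders, while the port and target path match this run's log:

	    package main

	    import (
	        "bytes"
	        "log"
	        "os/exec"
	    )

	    func main() {
	        manifest := []byte("# storage-provisioner.yaml contents held in memory\n")
	        // Stream the in-memory bytes to the target path over ssh, roughly
	        // what ssh_runner's "scp memory --> file" accomplishes.
	        cmd := exec.Command("ssh", "-p", "33118",
	            "-i", "/path/to/id_rsa", // placeholder key path
	            "docker@127.0.0.1",
	            "sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
	        cmd.Stdin = bytes.NewReader(manifest)
	        if err := cmd.Run(); err != nil {
	            log.Fatal(err)
	        }
	    }
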
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
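
	node_ready.go then polls the node object for up to 6m0s, treating transient API errors (the connection-refused lines further down) as reasons to keep waiting rather than fail. A sketch of that loop with client-go (clientset construction omitted; wait.PollImmediate is the older polling helper, deprecated in recent client-go, but it matches this pattern):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls until the node's Ready condition is True, swallowing
	    // transient errors (e.g. connection refused while the apiserver restarts).
	    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	        return wait.PollImmediate(3*time.Second, timeout, func() (bool, error) {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	            if err != nil {
	                fmt.Printf("error getting node %q: %v\n", name, err)
	                return false, nil // keep polling, as the log does
	            }
	            for _, c := range node.Status.Conditions {
	                if c.Type == corev1.NodeReady {
	                    return c.Status == corev1.ConditionTrue, nil
	                }
	            }
	            return false, nil
	        })
	    }

	    func main() {
	        // Build a clientset from a kubeconfig (clientcmd.BuildConfigFromFlags +
	        // kubernetes.NewForConfig) before calling waitNodeReady.
	    }
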
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
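
	The apply/retry churn that follows is all one pattern: each kubectl apply fails with connection refused while the apiserver comes back up, and retry.go reschedules it after a randomized, growing delay (370ms, 235ms, 255ms, ... eventually multiple seconds). A generic sketch of that jittered-backoff loop; this mirrors the behavior visible in the log, not minikube's exact retry implementation:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math"
	        "math/rand"
	        "time"
	    )

	    // retryWithBackoff retries apply with jittered exponential delays, echoing
	    // the "will retry after ..." lines above. Jitter keeps the parallel addon
	    // appliers from hammering the apiserver in lockstep.
	    func retryWithBackoff(attempts int, base time.Duration, apply func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = apply(); err == nil {
	                return nil
	            }
	            delay := time.Duration((1 + rand.Float64()) * float64(base) * math.Pow(2, float64(i)))
	            fmt.Printf("will retry after %v: %v\n", delay, err)
	            time.Sleep(delay)
	        }
	        return err
	    }

	    func main() {
	        err := retryWithBackoff(5, 200*time.Millisecond, func() error {
	            return errors.New("connection to the server localhost:8443 was refused")
	        })
	        fmt.Println("gave up:", err)
	    }
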
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
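	Each addon step above runs the guest's own kubectl binary through ssh_runner. As a rough illustration, the storageclass step could be reproduced by hand from the host while the profile is still up (a sketch only, reusing the profile name and manifest path from the log above; it is not part of the test run):
	
		minikube -p old-k8s-version-964633 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	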
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
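	All four profiles fail identically: the node_ready.go poll above never sees the node leave "Ready":"False", so after 4m0s each start exits with GUEST_START and "waitNodeCondition: context deadline exceeded". That poll boils down to reading the node's Ready condition through the Kubernetes API every couple of seconds. A minimal client-go sketch of the check follows; it approximates node_ready.go rather than quoting minikube's actual code, and the kubeconfig path and poll interval are assumptions:
	
		package main
		
		import (
			"context"
			"fmt"
			"time"
		
			v1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		// Poll a node's Ready condition until it turns True or the deadline
		// passes. A sketch of what minikube's node_ready.go wait amounts to,
		// not the actual minikube implementation.
		func main() {
			// Assumption: host kubeconfig; minikube itself reads the guest's
			// /var/lib/minikube/kubeconfig over SSH instead.
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			client, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
			defer cancel()
			const name = "embed-certs-974821" // node name taken from the log above
			for {
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err == nil {
					for _, c := range node.Status.Conditions {
						if c.Type == v1.NodeReady {
							// Mirrors the log format: node "..." has status "Ready":"False"
							fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
							if c.Status == v1.ConditionTrue {
								return
							}
						}
					}
				}
				select {
				case <-ctx.Done():
					fmt.Println("waitNodeCondition: context deadline exceeded")
					return
				case <-time.After(2500 * time.Millisecond): // ~2.5s cadence, as in the log
				}
			}
		}
	
	The persistent "Ready":"False" answers trace to the condition recorded in the describe output further below: the container runtime network is not ready because no CNI configuration exists in /etc/cni/net.d/, which in turn matches the missing kindnet image in the CRI-O log.
	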
	
	
	==> CRI-O <==
	Apr 01 20:49:26 embed-certs-974821 crio[550]: time="2025-04-01 20:49:26.274189101Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a3f0ce09-d75e-4d2d-9496-0eccc4579cef name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:40 embed-certs-974821 crio[550]: time="2025-04-01 20:49:40.273440897Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=61bfd576-e3b6-4e84-8d84-b54a0b4a19ff name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:40 embed-certs-974821 crio[550]: time="2025-04-01 20:49:40.273743648Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=61bfd576-e3b6-4e84-8d84-b54a0b4a19ff name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:54 embed-certs-974821 crio[550]: time="2025-04-01 20:49:54.273644833Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4773faa7-cf4c-4e86-b2e7-123303eaa611 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:49:54 embed-certs-974821 crio[550]: time="2025-04-01 20:49:54.273953491Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4773faa7-cf4c-4e86-b2e7-123303eaa611 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:07 embed-certs-974821 crio[550]: time="2025-04-01 20:50:07.273452898Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1f437b0a-f6dc-4b60-b0db-0177112eb0ea name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:07 embed-certs-974821 crio[550]: time="2025-04-01 20:50:07.273769053Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1f437b0a-f6dc-4b60-b0db-0177112eb0ea name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:19 embed-certs-974821 crio[550]: time="2025-04-01 20:50:19.274135290Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=cacf57a4-4f6d-4719-93f2-c568efcdf616 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:19 embed-certs-974821 crio[550]: time="2025-04-01 20:50:19.274378543Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=cacf57a4-4f6d-4719-93f2-c568efcdf616 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:32 embed-certs-974821 crio[550]: time="2025-04-01 20:50:32.273766509Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=9655ded9-bc06-4164-8e9d-7c3054d4a198 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:32 embed-certs-974821 crio[550]: time="2025-04-01 20:50:32.274029649Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=9655ded9-bc06-4164-8e9d-7c3054d4a198 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:44 embed-certs-974821 crio[550]: time="2025-04-01 20:50:44.273506608Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e9236ae3-cf2a-4bd9-841a-e31fffef8738 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:44 embed-certs-974821 crio[550]: time="2025-04-01 20:50:44.273787028Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e9236ae3-cf2a-4bd9-841a-e31fffef8738 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:55 embed-certs-974821 crio[550]: time="2025-04-01 20:50:55.274414012Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=432fc479-0d8f-4302-ae8b-efc6d222bfa0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:50:55 embed-certs-974821 crio[550]: time="2025-04-01 20:50:55.274625207Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=432fc479-0d8f-4302-ae8b-efc6d222bfa0 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:06 embed-certs-974821 crio[550]: time="2025-04-01 20:51:06.275001567Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=58e08cb4-8849-4a24-91b6-423e59933feb name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:06 embed-certs-974821 crio[550]: time="2025-04-01 20:51:06.275204660Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=58e08cb4-8849-4a24-91b6-423e59933feb name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:19 embed-certs-974821 crio[550]: time="2025-04-01 20:51:19.274060993Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=15377602-238e-4617-9e9e-aa42c587b297 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:19 embed-certs-974821 crio[550]: time="2025-04-01 20:51:19.274283560Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=15377602-238e-4617-9e9e-aa42c587b297 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:34 embed-certs-974821 crio[550]: time="2025-04-01 20:51:34.273941227Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=55deaa23-a313-4b33-a568-4145e9e0738d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:34 embed-certs-974821 crio[550]: time="2025-04-01 20:51:34.274253812Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=55deaa23-a313-4b33-a568-4145e9e0738d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:48 embed-certs-974821 crio[550]: time="2025-04-01 20:51:48.273492174Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=25a1c7fd-41d8-4d34-9b8f-53bff1d84cfc name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:51:48 embed-certs-974821 crio[550]: time="2025-04-01 20:51:48.273809408Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=25a1c7fd-41d8-4d34-9b8f-53bff1d84cfc name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:52:02 embed-certs-974821 crio[550]: time="2025-04-01 20:52:02.273986893Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=779cbd9e-b015-4ae8-a168-fd1736b984ae name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:52:02 embed-certs-974821 crio[550]: time="2025-04-01 20:52:02.274251897Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=779cbd9e-b015-4ae8-a168-fd1736b984ae name=/runtime.v1.ImageService/ImageStatus
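	
	The loop above is kubelet repeatedly probing for the kindnet CNI image and CRI-O answering that docker.io/kindest/kindnetd:v20250214-acbabc1a is not present; without that image the CNI pod cannot start, so the node never becomes Ready. If the registry pull is what failed, one hedged workaround is to pull the image inside the node by hand (the commands exist; whether the registry is reachable from this runner is an assumption):
	
		minikube -p embed-certs-974821 ssh -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	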
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c4be69226b22       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   13 minutes ago      Running             kube-proxy                1                   054a48bf8a57c       kube-proxy-gn6mh
	6709f6284d476       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   13 minutes ago      Running             kube-controller-manager   1                   68166a16e4ccf       kube-controller-manager-embed-certs-974821
	1b409b776938c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   13 minutes ago      Running             kube-apiserver            1                   5a3a166087255       kube-apiserver-embed-certs-974821
	a9f1f681f3bf4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   13 minutes ago      Running             kube-scheduler            1                   4fb08364de8f4       kube-scheduler-embed-certs-974821
	732a4bf5b37a1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   13 minutes ago      Running             etcd                      1                   d8b5cef371e62       etcd-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:52:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:49:57 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:49:57 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:49:57 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:49:57 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 28ebfe595ec94fb9a75839c7c4da9d65
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 25m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     25m                kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    25m                kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  25m                kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25m                node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [732a4bf5b37a17d64428372c4b341ca0176e303c278397947fc37e81f445b747] <==
	{"level":"info","ts":"2025-04-01T20:39:03.345939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:03.345955Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:03.347047Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:03.347143Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:03.348433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:03.347178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348580Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-04-01T20:39:04.920589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.306335ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571761152512035446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:665 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.94.2\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-04-01T20:39:04.921414Z","caller":"traceutil/trace.go:171","msg":"trace[478374922] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"174.148343ms","start":"2025-04-01T20:39:04.747247Z","end":"2025-04-01T20:39:04.921396Z","steps":["trace[478374922] 'process raft request'  (duration: 174.071396ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921615Z","caller":"traceutil/trace.go:171","msg":"trace[1294899020] linearizableReadLoop","detail":"{readStateIndex:873; appliedIndex:872; }","duration":"174.902577ms","start":"2025-04-01T20:39:04.746663Z","end":"2025-04-01T20:39:04.921566Z","steps":["trace[1294899020] 'read index received'  (duration: 981.565µs)","trace[1294899020] 'applied index is now lower than readState.Index'  (duration: 173.918021ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:04.921658Z","caller":"traceutil/trace.go:171","msg":"trace[1643816995] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"174.752569ms","start":"2025-04-01T20:39:04.746898Z","end":"2025-04-01T20:39:04.921650Z","steps":["trace[1643816995] 'process raft request'  (duration: 174.347461ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921801Z","caller":"traceutil/trace.go:171","msg":"trace[214304335] transaction","detail":"{read_only:false; number_of_response:1; response_revision:699; }","duration":"175.517874ms","start":"2025-04-01T20:39:04.746273Z","end":"2025-04-01T20:39:04.921791Z","steps":["trace[214304335] 'compare'  (duration: 172.157301ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.921867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.179491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-974821\" limit:1 ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2025-04-01T20:39:04.922390Z","caller":"traceutil/trace.go:171","msg":"trace[1175626099] range","detail":"{range_begin:/registry/minions/embed-certs-974821; range_end:; response_count:1; response_revision:701; }","duration":"175.735808ms","start":"2025-04-01T20:39:04.746639Z","end":"2025-04-01T20:39:04.922375Z","steps":["trace[1175626099] 'agreement among raft nodes before linearized reading'  (duration: 175.172297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.922892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.707137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:92298"}
	{"level":"info","ts":"2025-04-01T20:39:04.922963Z","caller":"traceutil/trace.go:171","msg":"trace[382725270] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:701; }","duration":"104.813727ms","start":"2025-04-01T20:39:04.818140Z","end":"2025-04-01T20:39:04.922954Z","steps":["trace[382725270] 'agreement among raft nodes before linearized reading'  (duration: 104.571539ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.923317Z","caller":"traceutil/trace.go:171","msg":"trace[1182439] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:701; }","duration":"104.889107ms","start":"2025-04-01T20:39:04.818419Z","end":"2025-04-01T20:39:04.923308Z","steps":["trace[1182439] 'agreement among raft nodes before linearized reading'  (duration: 104.87954ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.923503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.18834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-04-01T20:39:04.923557Z","caller":"traceutil/trace.go:171","msg":"trace[53470254] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:701; }","duration":"105.257596ms","start":"2025-04-01T20:39:04.818292Z","end":"2025-04-01T20:39:04.923549Z","steps":["trace[53470254] 'agreement among raft nodes before linearized reading'  (duration: 105.178511ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:37.619038Z","caller":"traceutil/trace.go:171","msg":"trace[512211353] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"105.547476ms","start":"2025-04-01T20:39:37.513466Z","end":"2025-04-01T20:39:37.619014Z","steps":["trace[512211353] 'process raft request'  (duration: 43.691695ms)","trace[512211353] 'compare'  (duration: 61.757597ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:37.620916Z","caller":"traceutil/trace.go:171","msg":"trace[1272640698] transaction","detail":"{read_only:false; response_revision:824; number_of_response:1; }","duration":"101.494988ms","start":"2025-04-01T20:39:37.519401Z","end":"2025-04-01T20:39:37.620896Z","steps":["trace[1272640698] 'process raft request'  (duration: 101.291053ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:49:03.370677Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":939}
	{"level":"info","ts":"2025-04-01T20:49:03.375303Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":939,"took":"4.360178ms","hash":2566575144,"current-db-size-bytes":1998848,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1998848,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-01T20:49:03.375354Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2566575144,"revision":939,"compact-revision":500}
	
	
	==> kernel <==
	 20:52:10 up  1:34,  0 users,  load average: 0.26, 0.39, 0.98
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [1b409b776938c7f6d6325283fe8d5f7d2038212e8bab65b45b30c12beae6f139] <==
	E0401 20:49:05.643088       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:49:05.643170       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:49:05.644130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:49:05.645218       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:50:05.644771       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:50:05.644831       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:50:05.645901       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:50:05.645938       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:50:05.646054       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:50:05.647166       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:52:05.646633       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:52:05.646685       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0401 20:52:05.647709       1 handler_proxy.go:99] no RequestInfo found in the context
	I0401 20:52:05.647726       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0401 20:52:05.647803       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:52:05.648868       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6709f6284d476f9efda2e9d43e571a75efeb97855b385ce4b1586eaa4de4f1a9] <==
	E0401 20:46:38.888583       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:46:38.924196       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:08.894833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:08.930553       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:47:38.899837       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:47:38.937794       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:08.905352       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:08.946112       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:48:38.910404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:48:38.953233       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:49:08.915578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:08.960371       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:49:38.920369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:49:38.966908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:49:57.027241       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	E0401 20:50:08.925721       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:08.973112       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:38.930608       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:38.979398       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:08.936205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:08.986536       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:38.942414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:38.993024       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:08.947656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:09.000718       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0c4be69226b22952a80da0c17c51cbc7f4486bc715cbe15cc3dd88daecfaf452] <==
	I0401 20:39:06.072071       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:06.448227       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:39:06.461903       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:06.641034       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:06.641193       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:06.661209       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:06.661731       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:06.661779       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.671952       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:06.673686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:06.672521       1 config.go:329] "Starting node config controller"
	I0401 20:39:06.673736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:06.672555       1 config.go:199] "Starting service config controller"
	I0401 20:39:06.673765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:06.774792       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:06.774838       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:06.775459       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a9f1f681f3bf4be0d5f99a181b4ddfe1efade3b57adf4f7e82926d6306363cec] <==
	I0401 20:39:02.378239       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:04.549023       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:04.549065       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:04.549076       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:04.549086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:04.727215       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:04.727317       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:04.729809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:04.729861       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:04.730096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:04.730177       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:04.842475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:51:20 embed-certs-974821 kubelet[676]: E0401 20:51:20.246264     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540680246001870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:20 embed-certs-974821 kubelet[676]: E0401 20:51:20.323568     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:25 embed-certs-974821 kubelet[676]: E0401 20:51:25.324967     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:30 embed-certs-974821 kubelet[676]: E0401 20:51:30.247583     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540690247359786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:30 embed-certs-974821 kubelet[676]: E0401 20:51:30.247625     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540690247359786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:30 embed-certs-974821 kubelet[676]: E0401 20:51:30.326187     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:34 embed-certs-974821 kubelet[676]: E0401 20:51:34.274485     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:51:35 embed-certs-974821 kubelet[676]: E0401 20:51:35.327214     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:40 embed-certs-974821 kubelet[676]: E0401 20:51:40.248614     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540700248410588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:40 embed-certs-974821 kubelet[676]: E0401 20:51:40.248659     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540700248410588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:40 embed-certs-974821 kubelet[676]: E0401 20:51:40.328277     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:45 embed-certs-974821 kubelet[676]: E0401 20:51:45.328906     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:48 embed-certs-974821 kubelet[676]: E0401 20:51:48.274067     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:51:50 embed-certs-974821 kubelet[676]: E0401 20:51:50.249990     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540710249802788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:50 embed-certs-974821 kubelet[676]: E0401 20:51:50.250020     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540710249802788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:51:50 embed-certs-974821 kubelet[676]: E0401 20:51:50.330013     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:51:55 embed-certs-974821 kubelet[676]: E0401 20:51:55.330882     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:52:00 embed-certs-974821 kubelet[676]: E0401 20:52:00.251392     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540720251167234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:00 embed-certs-974821 kubelet[676]: E0401 20:52:00.251434     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540720251167234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:00 embed-certs-974821 kubelet[676]: E0401 20:52:00.331931     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:52:02 embed-certs-974821 kubelet[676]: E0401 20:52:02.274492     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:52:05 embed-certs-974821 kubelet[676]: E0401 20:52:05.333700     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:52:10 embed-certs-974821 kubelet[676]: E0401 20:52:10.252765     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540730252516515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:10 embed-certs-974821 kubelet[676]: E0401 20:52:10.252809     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540730252516515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:52:10 embed-certs-974821 kubelet[676]: E0401 20:52:10.334592     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
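
The dump above tells a single consistent story: the kubelet cannot pull the kindnet CNI image (docker.io/kindest/kindnetd:v20250214-acbabc1a) because Docker Hub's unauthenticated pull rate limit rejects the request with toomanyrequests. With no CNI plugin running, nothing ever writes a config into /etc/cni/net.d/, the node stays Ready=False carrying the node.kubernetes.io/not-ready:NoSchedule taint, and every workload pod remains Pending. A minimal sketch of how one could confirm that chain against the live profile (the context and node name embed-certs-974821 are taken from this run):

	# Taint and Ready condition; expect not-ready:NoSchedule and Ready=False
	kubectl --context embed-certs-974821 describe node embed-certs-974821 | grep -E 'Taints|Ready'
	# CNI config directory inside the node; empty while kindnet sits in ImagePullBackOff
	out/minikube-linux-amd64 -p embed-certs-974821 ssh "ls -la /etc/cni/net.d/"
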
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1 (74.588115ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qwn44:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  8m6s (x2 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  16m (x2 over 21m)   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-nnhr5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-x6nnb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-q2fjx" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.49s)
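
Since the scheduler rejects busybox only for the untolerated not-ready taint, one plausible way to make a rerun independent of Docker Hub's rate limit is to side-load the kindnet image into the profile instead of letting the kubelet pull it. A hedged sketch (the tag is copied from the kubelet errors above; the host-side pull is assumed to be authenticated or already cached):

	# Pull once on the host, then load the image into the node's container storage
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	out/minikube-linux-amd64 -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

Once the image is present locally, kindnet can start and write its CNI config, the node goes Ready, and the Pending pods become schedulable.
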

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-p4fvg" [ed27ed13-b1a7-4240-bb98-42799c4e74b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0401 20:43:26.123928   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:43:45.468304   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:44:07.013084   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:44:28.649466   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:44:29.791512   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:44:53.251846   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:44:56.735451   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:45:08.531788   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:45:30.078661   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:45:37.514525   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:45:37.710996   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:45:52.854068   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:46:19.799199   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:47:00.577122   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:47:00.774727   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:48:05.582944   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:48:26.124729   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:48:45.467865   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:49:07.012497   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:49:29.791485   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:49:36.325220   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:49:53.251790   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:49:56.735651   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:50:37.514994   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:50:37.710548   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
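
The cert_rotation errors interleaved above come from this test process's client-go certificate watchers, which still reference kubeconfig entries for profiles that were apparently torn down earlier in the run (addons-649141, the *-460236 network profiles, functional-432066); they are noise with respect to this failure. When scanning a raw report like this one, filtering them out makes the real signal easier to follow; a small sketch, assuming the run has been saved locally as run.log (a hypothetical filename):

	grep -v 'cert_rotation.go' run.log
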
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:52:09.803319062 +0000 UTC m=+4015.404250502
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-964633 describe po kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-964633 describe po kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard:
Name:             kubernetes-dashboard-cd95d586-p4fvg
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             <none>
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=cd95d586
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/kubernetes-dashboard-cd95d586
Containers:
  kubernetes-dashboard:
    Image:      docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Port:       9090/TCP
    Host Port:  0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-56rf2 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-56rf2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-56rf2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  12m                default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Warning  FailedScheduling  11m (x1 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-964633 logs kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-964633 logs kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard:
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
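
The empty logs output above is expected: the pod was never scheduled (PodScheduled=False), so no container exists to produce logs. Note also that the pod tolerates node.kubernetes.io/not-ready only with effect NoExecute, while the node carries that taint with effect NoSchedule, which is exactly why the scheduler reports it as untolerated. A quick check of the scheduling condition (context and pod name taken from this run):

	kubectl --context old-k8s-version-964633 get pod kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")]}'
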
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.587755812Z",
	            "FinishedAt": "2025-04-01T20:38:52.359374523Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98507353cdf3ad29538d69a6c2ab371dc9afedd5474261071e73baebb06da200",
	            "SandboxKey": "/var/run/docker/netns/98507353cdf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:45:5d:ae:77:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "97180c448aba15ca3cf07e1fc19eac60b297d564aac63d5f4b5b7521b5a4989c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
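The inspect dump above is where the harness reads back how the profile container is wired: each guest port (22, 2376, 8443, ...) is published on 127.0.0.1 with an ephemeral host port. As a minimal sketch, the same lookup can be done from Go with the exact inspect template that appears in the cli_runner lines later in this log; the helper name is ours, not minikube's.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPortFor runs `docker container inspect` with a Go template to pull the
// host port bound to a given container port, mirroring the template used in
// the cli_runner.go lines of this log.
func hostPortFor(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// "old-k8s-version-964633" is the profile container inspected above.
	p, err := hostPortFor("old-k8s-version-964633", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + p) // e.g. 33118 per the Ports map above
}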
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.30327484s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
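This header layout matches what Go's k8s.io/klog/v2 (glog-style) logger emits. A small sketch, assuming the standard klog API, of producing lines in the same [IWEF]mmdd hh:mm:ss.uuuuuu format:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil) // registers -v, -logtostderr, ... on the default flag set
	flag.Parse()
	defer klog.Flush()

	// Produces a header like "I0401 20:38:54.007071  352934 main.go:16] ..."
	klog.Infof("Setting JSON to %v", false)
	klog.Warningf("unexpected machine state, will restart: %v", nil)
}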
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
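The two find/mv runs above mask conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so the change stays reversible. A sketch of the same mask-by-rename idea in Go; maskCNIConfigs is a hypothetical helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// maskCNIConfigs renames every file matching pattern in dir by appending
// ".mk_disabled", so the runtime stops loading it while the original
// content is preserved.
func maskCNIConfigs(dir, pattern string) error {
	matches, err := filepath.Glob(filepath.Join(dir, pattern))
	if err != nil {
		return err
	}
	for _, m := range matches {
		if filepath.Ext(m) == ".mk_disabled" {
			continue // already masked
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			return err
		}
		fmt.Println("masked", m)
	}
	return nil
}

func main() {
	if err := maskCNIConfigs("/etc/cni/net.d", "*loopback.conf*"); err != nil {
		fmt.Println(err)
	}
}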
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
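Each sed above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls). A sketch of the equivalent line-oriented substitution in Go, assuming the config is small enough to rewrite in memory; rewriteKey is illustrative.

package main

import (
	"os"
	"regexp"
)

// rewriteKey replaces any existing `key = ...` line with the given value,
// mirroring `sed -i 's|^.*key = .*$|key = "value"|'` from the log above.
func rewriteKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = rewriteKey(conf, "cgroup_manager", "cgroupfs")
}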
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
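acquireMachinesLock above is a named lock with Delay:500ms and Timeout:10m0s, serializing machine operations across concurrent minikube processes. Minikube itself uses a mutex library for this; purely as an illustration of the acquire-with-retry shape, a file-based sketch:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, retrying every
// delay, matching the Delay:500ms Timeout:10m0s shape in the log above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}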
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
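The kubelet drop-in above is generated with the node name, IP, and Kubernetes version substituted in, then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). A sketch of that render step with text/template, writing to stdout instead of the real path; the field names are ours, not minikube's.

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above; a real implementation would write
	// the rendered unit to the systemd drop-in directory on the node.
	_ = t.Execute(os.Stdout, map[string]string{
		"Version": "v1.32.2",
		"Node":    "no-preload-671514",
		"IP":      "192.168.76.2",
	})
}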
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
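
	The rendered kubeadm/kubelet/kube-proxy config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for sanity-checking such a file by hand on the node, assuming kubeadm v1.32 is on the PATH; --dry-run runs preflight and manifest generation without mutating anything:

		# Validate the generated config without touching the running node (assumes kubeadm v1.32).
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run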
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
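
	The one-liner above is minikube's idempotent /etc/hosts update: drop any stale tab-separated entry for the control-plane name, append the current mapping, then copy the temp file back. Unrolled as a sketch, with the IP and hostname taken from this run:

		HOSTS_IP="192.168.76.2"                       # value from the grep check above
		HOSTS_NAME="control-plane.minikube.internal"
		# Keep every line except an existing "<ip>\t$HOSTS_NAME" entry, then append the fresh one.
		{ grep -v $'\t'"${HOSTS_NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${HOSTS_IP}" "${HOSTS_NAME}"; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts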
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
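
	The -hash/ln -fs pairs above follow OpenSSL's hashed-directory convention: a CA is trusted once /etc/ssl/certs contains a symlink named <subject-hash>.0 pointing at its PEM. A quick check of that mapping, reusing the minikubeCA path from the log:

		# Recompute the subject hash and confirm the link the log just created (b5213941.0 here).
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		ls -l "/etc/ssl/certs/${h}.0"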
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
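
	Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 h); that exit status is how minikube decides whether a cert can be reused or must be regenerated. For example:

		# 0 = valid for at least another day; 1 = expiring soon or already expired.
		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
		  && echo "ok for 24h" || echo "needs regeneration"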
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
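
	The restart decision hinges on the diff two lines up: when the kubeadm.yaml already on the node matches the freshly rendered kubeadm.yaml.new, minikube skips re-running kubeadm entirely, which is why restartPrimaryControlPlane finishes in ~61ms here. The same check as a sketch:

		# diff exits 0 when the files are identical => no reconfiguration needed.
		if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
		    echo "running cluster does not require reconfiguration"
		fi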
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
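
	After the two kubectl apply batches above, the metrics-server objects land in kube-system and the dashboard objects in the kubernetes-dashboard namespace (the upstream defaults for these manifests; neither namespace is echoed in this log). A quick rollout check, assuming the same kubeconfig context:

		kubectl --context no-preload-671514 -n kube-system get deploy metrics-server
		kubectl --context no-preload-671514 -n kubernetes-dashboard get deploy,pods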
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
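
	The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube (presumably read by the node's crio.service as an environment file) and restarts CRI-O so the service CIDR is treated as an insecure registry range. To confirm it took effect from inside the node, e.g. via minikube ssh -p embed-certs-974821 (a sketch, assuming the systemd-based Ubuntu 22.04 image reported later in this log):

		cat /etc/sysconfig/crio.minikube
		sudo systemctl status crio --no-pager --lines=0   # the restart should have left CRI-O active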
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
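Both profiles then probe disk usage on /var with df piped through awk. The same check can be done without awk by splitting df's second output line; a sketch, with varUsage being an invented name:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// varUsage returns the Use% and Avail columns that `df -h /var` reports,
// the same numbers the awk pipelines in the log extract. Invented helper.
func varUsage() (usedPct, avail string, err error) {
	out, err := exec.Command("df", "-h", "/var").Output()
	if err != nil {
		return "", "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", "", fmt.Errorf("unexpected df output: %q", out)
	}
	// Data row: Filesystem Size Used Avail Use% Mounted-on
	f := strings.Fields(lines[1])
	if len(f) < 5 {
		return "", "", fmt.Errorf("short df row: %q", lines[1])
	}
	return f[4], f[3], nil
}

func main() {
	usedPct, avail, err := varUsage()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var: %s used, %s available\n", usedPct, avail)
}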
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
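The find/-exec mv passes above disable any loopback (and then bridge/podman) CNI configs by renaming them with a .mk_disabled suffix so the runtime ignores them. A rough Go equivalent of the loopback pass, assuming it runs with enough privilege to rename files under /etc/cni/net.d:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirrors: find /etc/cni/net.d -maxdepth 1 -name '*loopback.conf*' \
	//          -not -name '*.mk_disabled' -exec mv {} {}.mk_disabled
	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range matches {
		if strings.HasSuffix(m, ".mk_disabled") {
			continue // already parked
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			log.Fatal(err)
		}
		fmt.Println("disabled", m)
	}
}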
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
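Those sed invocations patch /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager, conmon_cgroup, and a default_sysctls block that reopens low ports to unprivileged processes. A sketch of the first three substitutions in pure Go with multiline regexps (not minikube's implementation, which delegates to sed on the guest):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' (after deleting old conmon lines)
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}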
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
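`crictl version` prints a flat key/value block, as captured above. Parsing it needs only string splitting; the struct and function below are illustrative, not minikube's own parser:

package main

import (
	"fmt"
	"strings"
)

// versionInfo mirrors the fields printed by `crictl version` above.
type versionInfo struct {
	Version, RuntimeName, RuntimeVersion, RuntimeAPIVersion string
}

func parseCrictlVersion(out string) versionInfo {
	var v versionInfo
	for _, line := range strings.Split(out, "\n") {
		k, val, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		val = strings.TrimSpace(val)
		switch strings.TrimSpace(k) {
		case "Version":
			v.Version = val
		case "RuntimeName":
			v.RuntimeName = val
		case "RuntimeVersion":
			v.RuntimeVersion = val
		case "RuntimeApiVersion":
			v.RuntimeAPIVersion = val
		}
	}
	return v
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.24.6\nRuntimeApiVersion:  v1\n"
	fmt.Printf("%+v\n", parseCrictlVersion(out))
}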
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
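Note the pause image chosen differs per profile: pause:3.10 for Kubernetes v1.32.2 above, pause:3.2 for this old-k8s-version profile's v1.20.0. A toy lookup capturing just the two mappings visible in this log (the fallback branch is an assumption, not minikube's real version table):

package main

import "fmt"

// pauseImage picks a pause image for a Kubernetes version.
// Only the two versions observed in this log are covered; hypothetical helper.
func pauseImage(k8sVersion string) string {
	switch k8sVersion {
	case "v1.20.0":
		return "registry.k8s.io/pause:3.2"
	case "v1.32.2":
		return "registry.k8s.io/pause:3.10"
	default:
		return "registry.k8s.io/pause:3.10" // assumed fallback, not authoritative
	}
}

func main() {
	for _, v := range []string{"v1.20.0", "v1.32.2"} {
		fmt.Println(v, "->", pauseImage(v))
	}
}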
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
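The node_ready lines in this stream come from repeatedly fetching the node and inspecting its Ready condition. A compact client-go sketch of such a wait loop; the kubeconfig path, node name, timeout, and poll interval are placeholders taken loosely from the log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-671514", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", node.Name, c.Status)
					if c.Status == corev1.ConditionTrue {
						return
					}
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node Ready")
		case <-time.After(5 * time.Second):
		}
	}
}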
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
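The bash one-liner above is the standard minikube /etc/hosts refresh: drop any line ending in a tab plus the host name, then append the new mapping. The same rewrite in Go, assuming root and tab-separated entries (the IP and host name come from this log line):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "host.minikube.internal"
	const ip = "192.168.94.1"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}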
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
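The preload check runs `sudo crictl images --output json` and compares the result against the expected image list. Decoding that JSON needs only a thin struct; the shape below is trimmed to the fields such a check would use and follows crictl's JSON output as I understand it:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json`
// that an "are all images preloaded" check needs.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs.Images {
		fmt.Println(img.RepoTags)
	}
}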
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
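The kubelet drop-in, service unit, and kubeadm.yaml scp'd above are rendered from per-profile values (Kubernetes version, node name, node IP). A text/template sketch of how a fragment like the kubelet unit could be produced; the template is abbreviated to three flags and is not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.32.2", "embed-certs-974821", "192.168.94.2"})
	if err != nil {
		log.Fatal(err)
	}
}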
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
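Each CA above is activated by hashing it (`openssl x509 -hash -noout`) and symlinking /etc/ssl/certs/<hash>.0 back at the PEM, which is what the test -L / ln -fs one-liners do. A small Go wrapper around the same openssl call (installCACert is an invented name):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// installCACert symlinks /etc/ssl/certs/<subject-hash>.0 to pemPath,
// delegating the hash computation to openssl exactly as the log does.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}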
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
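The -checkend 86400 probes verify each control-plane certificate stays valid for at least another day before it is reused. The equivalent check in pure Go via crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}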
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
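	The rendered kubeadm config above is what lands in /var/tmp/minikube/kubeadm.yaml.new. It can be schema-checked in place before kubeadm consumes it; a minimal sketch, assuming the node's kubeadm is v1.32.x (this subcommand exists in kubeadm v1.26 and later):
	
	# read-only schema check of the rendered config
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new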
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
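	The /etc/hosts one-liner above drops any stale control-plane.minikube.internal mapping, appends the current control-plane IP, and copies the temp file back into place. An equivalent unrolled form of the same edit:
	
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # keep every other entry
	printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$ # append the fresh mapping
	sudo cp /tmp/h.$$ /etc/hosts                                           # install the rewritten file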
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
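	The three openssl x509 -hash runs above explain the symlink names: b5213941.0, 51391683.0 and 3ec20f2e.0 are OpenSSL subject-name hashes, which is how OpenSSL-based clients locate CA certificates in /etc/ssl/certs. The same pattern done by hand:
	
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 here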
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
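	Each -checkend 86400 probe above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. For example:
	
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "valid for at least another 24h"
	else
	  echo "expired or expiring within 24h"   # regeneration path
	fi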
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
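	The 473237281-byte preload (~451 MiB) was copied over in ~1.67s and is unpacked above with tar -I lz4. If a copy like this were suspected of truncation, the frame can be tested without extracting; a small sketch, using the same lz4 binary the tar invocation relies on:
	
	stat -c "%s %y" /preloaded.tar.lz4   # size should match the 473237281 bytes transferred
	lz4 -t /preloaded.tar.lz4            # integrity-test the compressed frame; non-zero exit on corruption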
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
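	Both apply failures above share one root cause: the apiserver behind localhost:8444 is not accepting connections yet, so kubectl cannot fetch the OpenAPI schema it validates against, and minikube retries with backoff. The retry loop is effectively a readiness wait; a minimal equivalent sketch (readyz is served to unauthenticated clients by default):
	
	# poll the apiserver readiness endpoint before re-applying the addon manifests
	until curl -skf https://localhost:8444/readyz >/dev/null; do sleep 1; done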
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
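	The eight "No such image" lines above are expected, not failures: minikube first asks the local Docker daemon for each v1.20.0 image and only falls back to its on-disk cache (and, failing that, a registry pull) when that lookup errors. The probe amounts to:
	
	docker image inspect registry.k8s.io/pause:3.2 >/dev/null 2>&1 \
	  || echo "not in the local daemon; fall back to the minikube image cache"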
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
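The generated config speaks kubeadm.k8s.io/v1beta2 (the config API kubeadm v1.20.0 expects) and deliberately disables disk-pressure housekeeping (imageGCHighThresholdPercent: 100 and 0% evictionHard thresholds) so image churn on a busy CI disk cannot evict test pods. It is written to /var/tmp/minikube/kubeadm.yaml.new and only applied when it differs from the live copy; the same check minikube runs later (see the diff below) can be done by hand:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo "config unchanged"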
	
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
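The symlink names above are not arbitrary: OpenSSL looks up CAs by subject hash, and each /etc/ssl/certs/<hash>.0 link is named with the hash the preceding openssl x509 -hash -noout call prints. Reproducing the minikubeCA entry from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above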
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
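Each -checkend 86400 probe exits non-zero if the certificate expires within 86400 seconds (24 hours); a failing probe is what pushes minikube to regenerate a cert instead of reusing it. The same check in isolation:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring; needs regeneration"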
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
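The restart-vs-init decision keys off the sudo ls probe just above: when /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml, and /var/lib/minikube/etcd all exist, the control plane has been initialized before, so minikube takes the restartPrimaryControlPlane path rather than running kubeadm init from scratch. The probe itself is just:

	sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd && echo "restartable"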
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
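node_ready now polls the node object until its Ready condition reports True or the 6m0s budget runs out; the connection-refused error a little further down is this same poll failing while the apiserver is still coming up. An equivalent one-shot probe, using the kubeconfig and binary paths from the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl \
	  get node old-k8s-version-964633 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'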
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
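All of the "connection to the server localhost:8443 was refused" failures in this stretch share one cause: the addon manifests are applied through the node's kubectl while the apiserver static pod is still restarting, so each apply fails fast and retry.go reschedules it with a growing, jittered delay. A hand-rolled version of what the retries are effectively waiting for (a sketch; /healthz is the safe endpoint on v1.20):

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz >/dev/null 2>&1; do
	  sleep 2
	done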
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
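The "apply failed, will retry" / "will retry after ..." pairs above show the addon applier looping kubectl with short randomized delays until the apiserver on localhost:8443 starts accepting connections, after which all four applies complete within a few seconds of each other. A minimal sketch of that pattern, assuming a hypothetical applyWithRetry helper rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry shells out to kubectl and retries with a short randomized
// delay while the apiserver is still refusing connections, mirroring the
// "apply failed, will retry" / "will retry after ..." pairs in the log.
// Hypothetical helper, not minikube's actual retry.go.
func applyWithRetry(kubeconfig string, manifests []string, attempts int) error {
	args := []string{"apply", "--force"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %w: %s", err, out)
		// Randomized delay in roughly the 0.8s-3.3s band seen above.
		time.Sleep(800*time.Millisecond + time.Duration(rand.Int63n(int64(2500*time.Millisecond))))
	}
	return lastErr
}

func main() {
	manifests := []string{"/etc/kubernetes/addons/storage-provisioner.yaml"}
	if err := applyWithRetry("/var/lib/minikube/kubeconfig", manifests, 10); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}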
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
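All four clusters spend their remaining budget in the node_ready.go:53 loop above, re-fetching the node object until its Ready condition turns true, which here never happens. A minimal client-go sketch of such a wait, assuming a hypothetical waitNodeReady helper (not minikube's implementation):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-fetches the node every two seconds until its Ready
// condition is True or the timeout expires, the same loop shape as the
// node_ready.go entries above. Illustrative only.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient errors (e.g. "connection refused" while the
			// apiserver restarts) are logged and retried, not fatal.
			fmt.Printf("error getting node %q: %v\n", name, err)
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}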
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
	
	
	==> CRI-O <==
	Apr 01 20:49:27 old-k8s-version-964633 crio[545]: time="2025-04-01 20:49:27.991090546Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6c4b499a-9391-4e29-a509-7431b3029051 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:49:40 old-k8s-version-964633 crio[545]: time="2025-04-01 20:49:40.990956214Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1ecfea3f-f02a-40ae-89b9-aeff623df78d name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:49:40 old-k8s-version-964633 crio[545]: time="2025-04-01 20:49:40.991194629Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1ecfea3f-f02a-40ae-89b9-aeff623df78d name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:49:52 old-k8s-version-964633 crio[545]: time="2025-04-01 20:49:52.990745199Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=91eb2609-b736-46c0-9418-69a6a7d9cb16 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:49:52 old-k8s-version-964633 crio[545]: time="2025-04-01 20:49:52.990978520Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=91eb2609-b736-46c0-9418-69a6a7d9cb16 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:03 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:03.990881934Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=87d51532-3633-41f5-9b04-381ac20c9828 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:03 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:03.991172695Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=87d51532-3633-41f5-9b04-381ac20c9828 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:18 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:18.990812006Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4686b7cc-7446-476e-8e32-e3a0af1cf681 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:18 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:18.991065574Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4686b7cc-7446-476e-8e32-e3a0af1cf681 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:31 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:31.990962026Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3a9c55f9-2b0d-45c4-9fb3-82fdc12b2b69 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:31 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:31.991267292Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3a9c55f9-2b0d-45c4-9fb3-82fdc12b2b69 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:44 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:44.990788127Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=80c6c26d-12c1-46fd-9052-55c4647def55 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:44 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:44.991095031Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=80c6c26d-12c1-46fd-9052-55c4647def55 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:58 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:58.990802139Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e845b117-b127-4df6-9edb-949e2b1a4890 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:50:58 old-k8s-version-964633 crio[545]: time="2025-04-01 20:50:58.991083659Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e845b117-b127-4df6-9edb-949e2b1a4890 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:11.990762254Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ae2ad2d5-f71e-4e9f-80e7-ad6624136540 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:11.991031063Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ae2ad2d5-f71e-4e9f-80e7-ad6624136540 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:24 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:24.990764299Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=1d8d00a1-ce2b-4f3e-a036-d98ed1125ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:24 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:24.990982164Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=1d8d00a1-ce2b-4f3e-a036-d98ed1125ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:36 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:36.990846263Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=030bcab5-ac6d-4a38-8c00-7d38ff02b109 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:36 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:36.991092908Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=030bcab5-ac6d-4a38-8c00-7d38ff02b109 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:49 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:49.990642738Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2a0d5441-f5a7-4da2-be51-053a1e217495 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:51:49 old-k8s-version-964633 crio[545]: time="2025-04-01 20:51:49.990866821Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2a0d5441-f5a7-4da2-be51-053a1e217495 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:52:00 old-k8s-version-964633 crio[545]: time="2025-04-01 20:52:00.990940431Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5e8ed469-bd15-434d-b06c-e2f9528583d8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:52:00 old-k8s-version-964633 crio[545]: time="2025-04-01 20:52:00.991280089Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5e8ed469-bd15-434d-b06c-e2f9528583d8 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6e2a15624e6b       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   12 minutes ago      Running             kube-proxy                0                   d79aac48145ed       kube-proxy-vb8ks
	476cadc498ed3       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   12 minutes ago      Running             kube-apiserver            0                   a0f2a56e33baf       kube-apiserver-old-k8s-version-964633
	1cf26e38ac1c6       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   12 minutes ago      Running             etcd                      0                   b5c714ec70c88       etcd-old-k8s-version-964633
	e1f3c07569c92       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   12 minutes ago      Running             kube-scheduler            0                   b0dee5245ff96       kube-scheduler-old-k8s-version-964633
	a5bc89e701040       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   12 minutes ago      Running             kube-controller-manager   0                   a0fa04b1b1602       kube-controller-manager-old-k8s-version-964633
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:52:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:49:50 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:49:50 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:49:50 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:49:50 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 496e4a312fcb4e188c28b44d27ba4111
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25m
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  25m (x5 over 25m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m (x5 over 25m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m (x5 over 25m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 25m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  25m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m                kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 25m                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 13m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 13m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 13m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [1cf26e38ac1c6604c953475ca04f80ac9e1430c2d45615035dcca537258ed713] <==
	2025-04-01 20:48:29.690114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:48:39.690048 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:48:49.690106 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:48:59.690113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:09.690216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:14.365326 I | mvcc: store.index: compact 934
	2025-04-01 20:49:14.366426 I | mvcc: finished scheduled compaction at 934 (took 850.446µs)
	2025-04-01 20:49:19.690104 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:29.690065 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:39.690089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:49.690080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:49:59.690057 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:09.690076 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:19.690132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:29.690116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:39.690081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:49.690068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:50:59.690094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:09.690133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:19.690113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:29.690030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:39.690031 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:49.690038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:51:59.690139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:09.690175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:52:11 up  1:34,  0 users,  load average: 0.26, 0.39, 0.98
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [476cadc498ed38467dee6e6bd14670115232b713370264319c7e5a56ecaeac7d] <==
	I0401 20:48:46.412882       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:48:46.412890       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:49:18.248057       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:49:18.248142       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:49:18.248153       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:49:20.703609       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:49:20.703649       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:49:20.703657       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:50:02.367812       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:50:02.367860       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:50:02.367871       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:50:18.248338       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:50:18.248410       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:50:18.248418       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:50:34.753707       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:50:34.753772       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:50:34.753783       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:51:15.587247       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:51:15.587291       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:51:15.587299       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:51:47.446191       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:51:47.446236       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:51:47.446247       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [a5bc89e701040e08d72357e3dac6043fa2051845c4876d8d4c98324eb1a2f4d5] <==
	W0401 20:47:43.925295       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:48:13.533884       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:48:15.575596       1 request.go:655] Throttling request took 1.048582783s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0401 20:48:16.426594       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:48:44.035275       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:48:48.076725       1 request.go:655] Throttling request took 1.048833427s, request: GET:https://192.168.85.2:8443/apis/apps/v1?timeout=32s
	W0401 20:48:48.927608       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:49:14.536817       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:49:20.577921       1 request.go:655] Throttling request took 1.048681961s, request: GET:https://192.168.85.2:8443/apis/batch/v1?timeout=32s
	W0401 20:49:21.428933       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:49:45.038361       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:49:53.079354       1 request.go:655] Throttling request took 1.048595826s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0401 20:49:53.930122       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:50:15.539742       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:50:25.580411       1 request.go:655] Throttling request took 1.048576603s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0401 20:50:26.431004       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:50:46.039953       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:50:58.081312       1 request.go:655] Throttling request took 1.048765697s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0401 20:50:58.931911       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:51:16.541728       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:51:30.582099       1 request.go:655] Throttling request took 1.048712404s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0401 20:51:31.433318       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:51:47.043450       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:52:03.083632       1 request.go:655] Throttling request took 1.048676489s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0401 20:52:03.934804       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [b6e2a15624e6bfb4518956b54ad139920c531d3fc7c23adccb5f26ae8087b4ae] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0401 20:39:19.459621       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:39:19.459730       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:39:19.469176       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:39:19.469267       1 server_others.go:185] Using iptables Proxier.
	I0401 20:39:19.469492       1 server.go:650] Version: v1.20.0
	I0401 20:39:19.469980       1 config.go:224] Starting endpoint slice config controller
	I0401 20:39:19.469997       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:39:19.470025       1 config.go:315] Starting service config controller
	I0401 20:39:19.470030       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:39:19.570148       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:39:19.570204       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [e1f3c07569c92c3a8447517fe4a29b9a1107cefce6ec8dec3438e2043596f976] <==
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0401 20:39:13.424195       1 serving.go:331] Generated self-signed cert in-memory
	W0401 20:39:17.235518       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:17.235651       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:17.235691       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:17.235733       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:17.536554       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0401 20:39:17.536892       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0401 20:39:17.537005       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.537056       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.642397       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 20:50:42 old-k8s-version-964633 kubelet[986]: E0401 20:50:42.082895     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:50:44 old-k8s-version-964633 kubelet[986]: E0401 20:50:44.991361     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:50:47 old-k8s-version-964633 kubelet[986]: E0401 20:50:47.083649     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:50:52 old-k8s-version-964633 kubelet[986]: E0401 20:50:52.084361     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:50:57 old-k8s-version-964633 kubelet[986]: E0401 20:50:57.085000     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:50:58 old-k8s-version-964633 kubelet[986]: E0401 20:50:58.991323     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:51:02 old-k8s-version-964633 kubelet[986]: E0401 20:51:02.085695     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:07 old-k8s-version-964633 kubelet[986]: E0401 20:51:07.086307     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:11 old-k8s-version-964633 kubelet[986]: E0401 20:51:11.991267     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:51:12 old-k8s-version-964633 kubelet[986]: E0401 20:51:12.087011     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:17 old-k8s-version-964633 kubelet[986]: E0401 20:51:17.087717     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:22 old-k8s-version-964633 kubelet[986]: E0401 20:51:22.088358     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:24 old-k8s-version-964633 kubelet[986]: E0401 20:51:24.991220     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:51:27 old-k8s-version-964633 kubelet[986]: E0401 20:51:27.089054     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:32 old-k8s-version-964633 kubelet[986]: E0401 20:51:32.089666     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:36 old-k8s-version-964633 kubelet[986]: E0401 20:51:36.991352     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:51:37 old-k8s-version-964633 kubelet[986]: E0401 20:51:37.090315     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:42 old-k8s-version-964633 kubelet[986]: E0401 20:51:42.091039     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:47 old-k8s-version-964633 kubelet[986]: E0401 20:51:47.091721     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:49 old-k8s-version-964633 kubelet[986]: E0401 20:51:49.991081     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:51:52 old-k8s-version-964633 kubelet[986]: E0401 20:51:52.092389     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:51:57 old-k8s-version-964633 kubelet[986]: E0401 20:51:57.093142     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:52:00 old-k8s-version-964633 kubelet[986]: E0401 20:52:00.991544     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:52:02 old-k8s-version-964633 kubelet[986]: E0401 20:52:02.093853     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:52:07 old-k8s-version-964633 kubelet[986]: E0401 20:52:07.094524     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	

                                                
                                                
-- /stdout --
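The dump above points at one consistent root cause for the old-k8s-version profile: kubelet holds the node NotReady because no CNI configuration exists in /etc/cni/net.d/, and the kindnet pod that would install one never starts because pulls of docker.io/kindest/kindnetd:v20250214-acbabc1a sit in ImagePullBackOff. A plausible manual follow-up, outside the test harness (profile name taken from the logs; these commands are a diagnostic sketch, not part of the test run):

	# Confirm the CNI config directory really is empty on the node
	minikube ssh -p old-k8s-version-964633 -- ls -la /etc/cni/net.d/
	# Retry the kindnet image pull by hand to surface the underlying registry error
	minikube ssh -p old-k8s-version-964633 -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a

If the manual pull also fails, the failure is most likely a registry or network problem on the runner rather than a regression in minikube itself.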
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1 (69.825448ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-5nmbk:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-5nmbk
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  12m (x1 over 12m)   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
	  Warning  FailedScheduling  13m (x10 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "metrics-server-9975d5f86-vj6lt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-4cckx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd95d586-p4fvg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.70s)
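The FailedScheduling events on the busybox pod match the node state seen earlier: the scheduler rejects the only node for its node.kubernetes.io/not-ready taint, which kubelet keeps in place while the CNI config is missing. A quick way to confirm the scheduling block by hand (a sketch using the context name from the logs):

	# List the taints on the cluster's nodes; expect node.kubernetes.io/not-ready:NoSchedule
	kubectl --context old-k8s-version-964633 get nodes -o jsonpath='{.items[*].spec.taints}'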

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (235.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-d2blk" [7a49c269-ae5f-4a52-b427-720736dc552d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:55:53.013291309 +0000 UTC m=+4238.614222757
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-671514 describe po kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe po kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard: context deadline exceeded (2.171µs)
start_stop_delete_test.go:285: kubectl --context no-preload-671514 describe po kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-671514 logs kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-671514 logs kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard: context deadline exceeded (375ns)
start_stop_delete_test.go:285: kubectl --context no-preload-671514 logs kubernetes-dashboard-7779f9b69b-d2blk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-671514 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (308ns)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-671514 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
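Two things are worth noting in this failure mode. First, the dashboard pod is Pending/Unschedulable behind the same not-ready taint as above. Second, the sub-microsecond durations on the follow-up commands ("context deadline exceeded (2.171µs)") show the test's 9m0s Go context had already expired before kubectl was invoked, so those calls aborted without ever reaching the API server. The image check the test wanted could still be run by hand, for example (a sketch; deployment and context names taken from the log above):

	# Inspect the image on the dashboard-metrics-scraper deployment directly
	kubectl --context no-preload-671514 get deploy dashboard-metrics-scraper -n kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[*].image}'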
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-671514
helpers_test.go:235: (dbg) docker inspect no-preload-671514:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	        "Created": "2025-04-01T20:25:53.686266943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 347539,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:47.214891198Z",
	            "FinishedAt": "2025-04-01T20:38:46.056346181Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/hosts",
	        "LogPath": "/var/lib/docker/containers/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8/4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8-json.log",
	        "Name": "/no-preload-671514",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-671514:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-671514",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b963fad5d9e886c9ec6f3bd6b070e579e7e1a633869d15a536a8711fdb290e8",
	                "LowerDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a9e7dea756430597982fa6d26a171cb98d019175300474f6b4a502bdb1b0a2f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-671514",
	                "Source": "/var/lib/docker/volumes/no-preload-671514/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-671514",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-671514",
	                "name.minikube.sigs.k8s.io": "no-preload-671514",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5bbc852e72936fcd498ad1c3a51d7c1f88352c6a93862744e1874c53a1007c0b",
	            "SandboxKey": "/var/run/docker/netns/5bbc852e7293",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-671514": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:42:07:e3:85:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b666aa65b1b8b24b13025df1315f136e1a045fd16a2b4c481b2ab1513656dae4",
	                    "EndpointID": "3e43b7030559efe8587100f9aafe4e5d830bd7b517b3927b0b1dddcdf10d9cd5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-671514",
	                        "4b963fad5d9e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
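Note the split between HostConfig.PortBindings, where every HostPort is empty (minikube lets Docker assign ephemeral host ports), and NetworkSettings.Ports, where the live assignments appear (22/tcp on 127.0.0.1:33108, 8443/tcp on 33111, and so on). A one-liner sketch using Docker's Go-template support to pull out a single mapping, here the SSH port:

    docker inspect no-preload-671514 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'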
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-671514 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-671514 logs -n 25: (1.775654884s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:55 UTC |
	| start   | -p newest-cni-235733 --memory=2200 --alsologtostderr   | newest-cni-235733            | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
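The audit trail captures the failing scenario's setup: metrics-server and dashboard are enabled with their images deliberately rewritten to registry.k8s.io/echoserver:1.4 (and the metrics-server registry pointed at the unreachable fake.domain), the profile is stopped, and start is replayed with the same flags. AddonExistsAfterStop then asserts that the scraper deployment carries the echoserver image. Replaying the override step by hand would look like this (flags copied verbatim from the table above):

    minikube addons enable dashboard -p no-preload-671514 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4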
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:55:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:55:51.058989  368392 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:55:51.059115  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059125  368392 out.go:358] Setting ErrFile to fd 2...
	I0401 20:55:51.059129  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059321  368392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:55:51.059942  368392 out.go:352] Setting JSON to false
	I0401 20:55:51.061160  368392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5897,"bootTime":1743535054,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:55:51.061260  368392 start.go:139] virtualization: kvm guest
	I0401 20:55:51.063851  368392 out.go:177] * [newest-cni-235733] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:55:51.065265  368392 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:55:51.065294  368392 notify.go:220] Checking for updates...
	I0401 20:55:51.067422  368392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:55:51.068384  368392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:55:51.069267  368392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:55:51.070173  368392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:55:51.071206  368392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:55:51.072571  368392 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072680  368392 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072792  368392 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072915  368392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:55:51.095817  368392 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:55:51.095892  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.143133  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.134138641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.143250  368392 docker.go:318] overlay module found
	I0401 20:55:51.144967  368392 out.go:177] * Using the docker driver based on user configuration
	I0401 20:55:51.146008  368392 start.go:297] selected driver: docker
	I0401 20:55:51.146024  368392 start.go:901] validating driver "docker" against <nil>
	I0401 20:55:51.146036  368392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:55:51.146923  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.198901  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.190131502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.199094  368392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0401 20:55:51.199131  368392 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0401 20:55:51.199479  368392 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 20:55:51.201405  368392 out.go:177] * Using Docker driver with root privileges
	I0401 20:55:51.202338  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:55:51.202417  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:55:51.202433  368392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:55:51.202524  368392 start.go:340] cluster config:
	{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:55:51.203656  368392 out.go:177] * Starting "newest-cni-235733" primary control-plane node in "newest-cni-235733" cluster
	I0401 20:55:51.204742  368392 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:55:51.205935  368392 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:55:51.207028  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:55:51.207058  368392 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:55:51.207064  368392 cache.go:56] Caching tarball of preloaded images
	I0401 20:55:51.207135  368392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:55:51.207163  368392 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:55:51.207171  368392 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:55:51.207259  368392 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json ...
	I0401 20:55:51.207276  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json: {Name:mk10fae3f4d17094cdcb12dcfa676dc28e751b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:55:51.227186  368392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:55:51.227203  368392 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:55:51.227224  368392 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:55:51.227260  368392 start.go:360] acquireMachinesLock for newest-cni-235733: {Name:mk2bd08d0a606a11f78441bb216ae502c7382305 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:55:51.227360  368392 start.go:364] duration metric: took 83.063µs to acquireMachinesLock for "newest-cni-235733"
	I0401 20:55:51.227399  368392 start.go:93] Provisioning new machine with config: &{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:55:51.227477  368392 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Apr 01 20:52:52 no-preload-671514 crio[550]: time="2025-04-01 20:52:52.556447912Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=b0987d7f-21bf-49cd-8b3a-01553c4cfb47 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:52:52 no-preload-671514 crio[550]: time="2025-04-01 20:52:52.568047713Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:52:55 no-preload-671514 crio[550]: time="2025-04-01 20:52:55.962725565Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:53:40 no-preload-671514 crio[550]: time="2025-04-01 20:53:40.555769598Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6c118857-a6a4-4944-8660-2dad18be45db name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:40 no-preload-671514 crio[550]: time="2025-04-01 20:53:40.556051333Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6c118857-a6a4-4944-8660-2dad18be45db name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:53 no-preload-671514 crio[550]: time="2025-04-01 20:53:53.554955729Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=566f092d-9f84-420f-884e-ea76eb0252e1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:53 no-preload-671514 crio[550]: time="2025-04-01 20:53:53.555181978Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=566f092d-9f84-420f-884e-ea76eb0252e1 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:06 no-preload-671514 crio[550]: time="2025-04-01 20:54:06.555257884Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=98c3f2eb-b990-43f4-8e74-d1148dbbc063 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:06 no-preload-671514 crio[550]: time="2025-04-01 20:54:06.555579106Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=98c3f2eb-b990-43f4-8e74-d1148dbbc063 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:17 no-preload-671514 crio[550]: time="2025-04-01 20:54:17.555494043Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7036bacb-1cde-4301-a27a-3e58f02bd121 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:17 no-preload-671514 crio[550]: time="2025-04-01 20:54:17.555802633Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7036bacb-1cde-4301-a27a-3e58f02bd121 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:29 no-preload-671514 crio[550]: time="2025-04-01 20:54:29.555230877Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a64b3ae2-285a-4fa3-ae1c-a3e608c5ab15 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:29 no-preload-671514 crio[550]: time="2025-04-01 20:54:29.555436921Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a64b3ae2-285a-4fa3-ae1c-a3e608c5ab15 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:41 no-preload-671514 crio[550]: time="2025-04-01 20:54:41.555149345Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=361092de-9576-41d7-91e1-63290747d0dc name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:41 no-preload-671514 crio[550]: time="2025-04-01 20:54:41.555364580Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=361092de-9576-41d7-91e1-63290747d0dc name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:53 no-preload-671514 crio[550]: time="2025-04-01 20:54:53.555032402Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=41084730-d11a-48cc-aa69-298e375d8209 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:53 no-preload-671514 crio[550]: time="2025-04-01 20:54:53.555364525Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=41084730-d11a-48cc-aa69-298e375d8209 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:08 no-preload-671514 crio[550]: time="2025-04-01 20:55:08.555656032Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e3cce753-5874-42ba-affe-98ad5b061bcd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:08 no-preload-671514 crio[550]: time="2025-04-01 20:55:08.555960796Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e3cce753-5874-42ba-affe-98ad5b061bcd name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:23 no-preload-671514 crio[550]: time="2025-04-01 20:55:23.555298571Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3443bf49-438e-4820-9666-82727f008218 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:23 no-preload-671514 crio[550]: time="2025-04-01 20:55:23.555508766Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3443bf49-438e-4820-9666-82727f008218 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:36 no-preload-671514 crio[550]: time="2025-04-01 20:55:36.555642656Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ed261c5b-9fa9-4110-a5f5-62b6436f5f8e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:36 no-preload-671514 crio[550]: time="2025-04-01 20:55:36.555918827Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ed261c5b-9fa9-4110-a5f5-62b6436f5f8e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:48 no-preload-671514 crio[550]: time="2025-04-01 20:55:48.555345648Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=79531cf7-e0cf-414b-90c3-97ce00d78dd3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:48 no-preload-671514 crio[550]: time="2025-04-01 20:55:48.555639759Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=79531cf7-e0cf-414b-90c3-97ce00d78dd3 name=/runtime.v1.ImageService/ImageStatus
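Every CRI-O entry for the last few minutes is the same pair: a status check for docker.io/kindest/kindnetd:v20250214-acbabc1a followed by "not found". The pull attempts at 20:52 never completed, and because this profile runs with --preload=false there is no cached copy to fall back on. If registry access from the runner is rate-limited or blocked, one workaround sketch (assuming the host itself can reach Docker Hub) is to side-load the image into the profile:

    docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
    minikube -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a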
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea145bd33786b       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   16 minutes ago      Running             kube-proxy                1                   ce01896c90f77       kube-proxy-pfvch
	ee48c6782a18b       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   16 minutes ago      Running             kube-apiserver            1                   56ea918890fe0       kube-apiserver-no-preload-671514
	c433696fcee19       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   16 minutes ago      Running             kube-controller-manager   1                   84d0bba648e43       kube-controller-manager-no-preload-671514
	b1d13381b02cc       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   16 minutes ago      Running             kube-scheduler            1                   b988612136b4f       kube-scheduler-no-preload-671514
	c26ee68cb1e41       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   16 minutes ago      Running             etcd                      1                   aba801a800b41       etcd-no-preload-671514
	
	
	==> describe nodes <==
	Name:               no-preload-671514
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-671514
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=no-preload-671514
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_33_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-671514
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:55:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:51:33 +0000   Tue, 01 Apr 2025 20:26:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-671514
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 607874eb563c47059868a4160125dbb6
	  System UUID:                140301ee-9700-46a7-bc42-2a6702dcb846
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-no-preload-671514                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-5tgtq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-no-preload-671514             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-671514    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-pfvch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-no-preload-671514             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientPID     29m                kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 29m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m                kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 29m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29m                node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node no-preload-671514 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node no-preload-671514 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node no-preload-671514 event: Registered Node no-preload-671514 in Controller
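The events line up with the CRI-O log: the node has been NotReady since 20:26 because NetworkReady=false ("No CNI configuration file in /etc/cni/net.d/"), kindnet-5tgtq cannot start without its image, and with no Ready node the dashboard pods stay Pending until every wait in the test times out. A quick manual triage sketch for this state (the kindnet pod name is taken from the listing above):

    kubectl --context no-preload-671514 get nodes
    kubectl --context no-preload-671514 -n kube-system describe pod kindnet-5tgtq
    minikube -p no-preload-671514 ssh -- ls /etc/cni/net.d/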
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [c26ee68cb1e41434cb1773276a80f9b07dd93b734f39daae74d2886e50d29ba0] <==
	{"level":"info","ts":"2025-04-01T20:38:55.526538Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:38:57.022450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-04-01T20:38:57.022540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.022579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-04-01T20:38:57.023544Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-671514 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:38:57.023604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023623Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:38:57.023843Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.023936Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:38:57.024487Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.024568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:38:57.025105Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:38:57.025225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:00.430539Z","caller":"traceutil/trace.go:171","msg":"trace[280012238] transaction","detail":"{read_only:false; response_revision:772; number_of_response:1; }","duration":"101.218224ms","start":"2025-04-01T20:39:00.329302Z","end":"2025-04-01T20:39:00.430521Z","steps":["trace[280012238] 'process raft request'  (duration: 46.826091ms)","trace[280012238] 'compare'  (duration: 54.291765ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:00.548330Z","caller":"traceutil/trace.go:171","msg":"trace[1807709246] transaction","detail":"{read_only:false; response_revision:773; number_of_response:1; }","duration":"108.767351ms","start":"2025-04-01T20:39:00.439528Z","end":"2025-04-01T20:39:00.548295Z","steps":["trace[1807709246] 'process raft request'  (duration: 96.291629ms)","trace[1807709246] 'compare'  (duration: 12.091718ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:48:57.043906Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":940}
	{"level":"info","ts":"2025-04-01T20:48:57.048190Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":940,"took":"3.988148ms","hash":4160464570,"current-db-size-bytes":1757184,"current-db-size":"1.8 MB","current-db-size-in-use-bytes":1757184,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-04-01T20:48:57.048225Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4160464570,"revision":940,"compact-revision":505}
	{"level":"info","ts":"2025-04-01T20:53:57.049650Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1059}
	{"level":"info","ts":"2025-04-01T20:53:57.051998Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1059,"took":"2.029656ms","hash":3989537289,"current-db-size-bytes":1757184,"current-db-size":"1.8 MB","current-db-size-in-use-bytes":1028096,"current-db-size-in-use":"1.0 MB"}
	{"level":"info","ts":"2025-04-01T20:53:57.052044Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3989537289,"revision":1059,"compact-revision":940}
	
	
	==> kernel <==
	 20:55:54 up  1:38,  0 users,  load average: 0.78, 0.42, 0.86
	Linux no-preload-671514 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ee48c6782a18ba4755d82a0a5bf1ad1b855dfd1d70fdd7295d33e8a88f8775d5] <==
	I0401 20:51:59.139730       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:51:59.139747       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:53:58.141400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:53:58.141498       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0401 20:53:59.142727       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:53:59.142727       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:53:59.142811       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:53:59.142835       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:53:59.143926       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:53:59.143954       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:54:59.144946       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:54:59.144946       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:54:59.145020       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0401 20:54:59.145029       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0401 20:54:59.146144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:54:59.146157       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c433696fcee19b99e87b3d9433f8add31e3b93cb7663068ef9be96761a9725fd] <==
	E0401 20:50:02.441322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:02.516585       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:50:32.447216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:32.523449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:02.452701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:02.530630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:32.458468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:32.537332       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:51:33.268662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-671514"
	E0401 20:52:02.464097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:02.544334       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:32.469507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:32.551881       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:02.475481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:02.560283       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:32.480335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:32.567729       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:02.486413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:02.574723       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:32.492042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:32.580998       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:55:02.498888       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:02.587817       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:55:32.504335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:32.594421       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [ea145bd33786beab5695edea53c4427b5de9ac7e59c201cefdd36226f43e54ca] <==
	I0401 20:38:59.352570       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:38:59.739049       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0401 20:38:59.739232       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:38:59.932876       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:38:59.932949       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:38:59.936073       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:38:59.936478       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:38:59.936515       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:59.939364       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:00.018698       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:38:59.961970       1 config.go:199] "Starting service config controller"
	I0401 20:39:00.018788       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:38:59.963606       1 config.go:329] "Starting node config controller"
	I0401 20:39:00.018803       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:00.121850       1 shared_informer.go:320] Caches are synced for node config
	I0401 20:39:00.121958       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:00.122020       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b1d13381b02cc94d594efb9905918a3d246d7722a4c6dbc1796409ac561c2e3d] <==
	I0401 20:38:56.385160       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:38:58.139246       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:38:58.139285       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:38:58.139315       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:38:58.139326       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:38:58.244037       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:38:58.244065       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:38:58.245973       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:38:58.246009       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:38:58.246168       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:38:58.246306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:38:58.348872       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:55:04 no-preload-671514 kubelet[663]: E0401 20:55:04.770384     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:08 no-preload-671514 kubelet[663]: E0401 20:55:08.556253     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: initializing source docker://kindest/kindnetd:v20250214-acbabc1a: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:55:09 no-preload-671514 kubelet[663]: E0401 20:55:09.771565     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:14 no-preload-671514 kubelet[663]: E0401 20:55:14.664030     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540914663852852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:14 no-preload-671514 kubelet[663]: E0401 20:55:14.664076     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540914663852852,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:14 no-preload-671514 kubelet[663]: E0401 20:55:14.772333     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:19 no-preload-671514 kubelet[663]: E0401 20:55:19.773770     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:23 no-preload-671514 kubelet[663]: E0401 20:55:23.555795     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: initializing source docker://kindest/kindnetd:v20250214-acbabc1a: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:55:24 no-preload-671514 kubelet[663]: E0401 20:55:24.665223     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540924664957619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:24 no-preload-671514 kubelet[663]: E0401 20:55:24.665267     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540924664957619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:24 no-preload-671514 kubelet[663]: E0401 20:55:24.775480     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:29 no-preload-671514 kubelet[663]: E0401 20:55:29.776899     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:34 no-preload-671514 kubelet[663]: E0401 20:55:34.666363     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540934666124962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:34 no-preload-671514 kubelet[663]: E0401 20:55:34.666412     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540934666124962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:34 no-preload-671514 kubelet[663]: E0401 20:55:34.777732     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:36 no-preload-671514 kubelet[663]: E0401 20:55:36.556161     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: initializing source docker://kindest/kindnetd:v20250214-acbabc1a: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:55:39 no-preload-671514 kubelet[663]: E0401 20:55:39.778866     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:44 no-preload-671514 kubelet[663]: E0401 20:55:44.667344     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540944667147527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:44 no-preload-671514 kubelet[663]: E0401 20:55:44.667383     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540944667147527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:44 no-preload-671514 kubelet[663]: E0401 20:55:44.779574     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:48 no-preload-671514 kubelet[663]: E0401 20:55:48.556023     663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: initializing source docker://kindest/kindnetd:v20250214-acbabc1a: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-5tgtq" podUID="60e1a3a5-d05f-4fb5-98a0-88272ec3ebf5"
	Apr 01 20:55:49 no-preload-671514 kubelet[663]: E0401 20:55:49.780947     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:54 no-preload-671514 kubelet[663]: E0401 20:55:54.668517     663 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540954668278460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:54 no-preload-671514 kubelet[663]: E0401 20:55:54.668565     663 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540954668278460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:92082,},InodesUsed:&UInt64Value{Value:43,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:54 no-preload-671514 kubelet[663]: E0401 20:55:54.782178     663 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
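The repeating v1beta1.metrics.k8s.io 503 errors in the kube-apiserver and kube-controller-manager sections above are a downstream symptom rather than a separate failure: metrics-server never gets scheduled, so its aggregated APIService has no ready endpoint and every OpenAPI/discovery pass against it fails. One way to confirm that reading (a sketch; the profile name comes from this report, and the jsonpath is just one convenient way to surface the Available condition):

	kubectl --context no-preload-671514 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'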
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-671514 -n no-preload-671514
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-671514 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1 (70.399141ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxxvc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-hxxvc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  11m (x2 over 16m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  19m (x2 over 25m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-vmgsv" not found
	Error from server (NotFound): pods "kindnet-5tgtq" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-28pk4" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-nmk5v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-d2blk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-671514 describe pod busybox coredns-668d6bf9bc-vmgsv kindnet-5tgtq metrics-server-f79f97bbb-28pk4 storage-provisioner dashboard-metrics-scraper-86c6bf9756-nmk5v kubernetes-dashboard-7779f9b69b-d2blk: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (235.68s)
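The kubelet entries in the log dump above pin down the root cause of this failure: every pull of docker.io/kindest/kindnetd:v20250214-acbabc1a is rejected with toomanyrequests (Docker Hub's unauthenticated pull limit), so kindnet-cni never starts, no CNI configuration appears in /etc/cni/net.d/, the node keeps its node.kubernetes.io/not-ready taint, and the dashboard and metrics-server pods listed in the post-mortem stay Pending. A minimal remediation sketch, assuming access to a host that is authenticated to Docker Hub (the profile and image names are taken from the logs; the side-load itself is a suggested workaround, not something the harness performs):

	# Confirm the taint that keeps every pod Pending
	kubectl --context no-preload-671514 describe node no-preload-671514 | grep -i taints

	# Pull the CNI image on an authenticated host, then side-load it into the
	# profile so the kubelet never has to hit the rate-limited registry
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p no-preload-671514 image load docker.io/kindest/kindnetd:v20250214-acbabc1a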

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (254.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rwzdk" [b25763a9-af09-4aa5-b4e1-eefefa2ff944] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:56:18.983219571 +0000 UTC m=+4264.584151005
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe po kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe po kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard: context deadline exceeded (1.858µs)
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-993330 describe po kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 logs kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 logs kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard: context deadline exceeded (274ns)
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-993330 logs kubernetes-dashboard-7779f9b69b-rwzdk -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (274ns)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-993330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
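This is the same failure mode as the no-preload profile above: the dashboard pod stays Unschedulable because the node never sheds its not-ready taint while the CNI image pull is rate-limited, and the subsequent describe/logs probes then fail immediately against the already-expired test context. A quick check of the two blocking pieces (a sketch; the daemonset name kindnet is an assumption inferred from the kindnet-* pod names elsewhere in this report):

	kubectl --context default-k8s-diff-port-993330 -n kube-system get daemonset kindnet -o wide
	kubectl --context default-k8s-diff-port-993330 get node -o jsonpath='{.items[0].spec.taints}'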
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-993330
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-993330:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	        "Created": "2025-04-01T20:26:24.327880395Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353427,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:54.287928611Z",
	            "FinishedAt": "2025-04-01T20:38:53.06055829Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hostname",
	        "HostsPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/hosts",
	        "LogPath": "/var/lib/docker/containers/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583/311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583-json.log",
	        "Name": "/default-k8s-diff-port-993330",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993330:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993330",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "311426103e1d421defb3b525ed24fa61e940a4b368ee39580ecb09c088beb583",
	                "LowerDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49583a1b5706b27fd9041616b7f6beb3d0b6e75f5b151b7300b2b009392062ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993330",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993330/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993330",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993330",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec09fa1a9496e05123b7a54f35ba87b679a89f15a6b0677344788b51903d4cb2",
	            "SandboxKey": "/var/run/docker/netns/ec09fa1a9496",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-993330": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:be:99:3d:93:11",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b94244e6c484722c7642763834f51a693815053013b68dff43e8ef12487407c",
	                    "EndpointID": "5aaf086e3c391b2394b006ad5aca69dfaf955cf2259cb4d42342fb401f46a6a2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993330",
	                        "311426103e1d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
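The inspect output captures the profile's distinguishing detail: the apiserver serves on the non-default port 8444 inside the container, and Docker publishes it to the host as 127.0.0.1:33126 (see the Ports section above). A reachability sketch using that mapping (an unauthenticated request may return 401/403 rather than ok, but any HTTP response shows the apiserver itself is up, consistent with this being a scheduling failure rather than a control-plane outage):

	curl -k https://127.0.0.1:33126/livez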
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993330 logs -n 25: (1.021158343s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:55 UTC |
	| start   | -p newest-cni-235733 --memory=2200 --alsologtostderr   | newest-cni-235733            | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:55 UTC |
	| delete  | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:56 UTC | 01 Apr 25 20:56 UTC |
	| addons  | enable metrics-server -p newest-cni-235733             | newest-cni-235733            | jenkins | v1.35.0 | 01 Apr 25 20:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:55:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:55:51.058989  368392 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:55:51.059115  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059125  368392 out.go:358] Setting ErrFile to fd 2...
	I0401 20:55:51.059129  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059321  368392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:55:51.059942  368392 out.go:352] Setting JSON to false
	I0401 20:55:51.061160  368392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5897,"bootTime":1743535054,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:55:51.061260  368392 start.go:139] virtualization: kvm guest
	I0401 20:55:51.063851  368392 out.go:177] * [newest-cni-235733] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:55:51.065265  368392 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:55:51.065294  368392 notify.go:220] Checking for updates...
	I0401 20:55:51.067422  368392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:55:51.068384  368392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:55:51.069267  368392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:55:51.070173  368392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:55:51.071206  368392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:55:51.072571  368392 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072680  368392 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072792  368392 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072915  368392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:55:51.095817  368392 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:55:51.095892  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.143133  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.134138641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.143250  368392 docker.go:318] overlay module found
	I0401 20:55:51.144967  368392 out.go:177] * Using the docker driver based on user configuration
	I0401 20:55:51.146008  368392 start.go:297] selected driver: docker
	I0401 20:55:51.146024  368392 start.go:901] validating driver "docker" against <nil>
	I0401 20:55:51.146036  368392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:55:51.146923  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.198901  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.190131502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.199094  368392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0401 20:55:51.199131  368392 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0401 20:55:51.199479  368392 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 20:55:51.201405  368392 out.go:177] * Using Docker driver with root privileges
	I0401 20:55:51.202338  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:55:51.202417  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:55:51.202433  368392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:55:51.202524  368392 start.go:340] cluster config:
	{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:55:51.203656  368392 out.go:177] * Starting "newest-cni-235733" primary control-plane node in "newest-cni-235733" cluster
	I0401 20:55:51.204742  368392 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:55:51.205935  368392 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:55:51.207028  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:55:51.207058  368392 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:55:51.207064  368392 cache.go:56] Caching tarball of preloaded images
	I0401 20:55:51.207135  368392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:55:51.207163  368392 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:55:51.207171  368392 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:55:51.207259  368392 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json ...
	I0401 20:55:51.207276  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json: {Name:mk10fae3f4d17094cdcb12dcfa676dc28e751b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:55:51.227186  368392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:55:51.227203  368392 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:55:51.227224  368392 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:55:51.227260  368392 start.go:360] acquireMachinesLock for newest-cni-235733: {Name:mk2bd08d0a606a11f78441bb216ae502c7382305 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:55:51.227360  368392 start.go:364] duration metric: took 83.063µs to acquireMachinesLock for "newest-cni-235733"
	I0401 20:55:51.227399  368392 start.go:93] Provisioning new machine with config: &{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:55:51.227477  368392 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:55:51.229133  368392 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:55:51.229333  368392 start.go:159] libmachine.API.Create for "newest-cni-235733" (driver="docker")
	I0401 20:55:51.229364  368392 client.go:168] LocalClient.Create starting
	I0401 20:55:51.229457  368392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:55:51.229488  368392 main.go:141] libmachine: Decoding PEM data...
	I0401 20:55:51.229503  368392 main.go:141] libmachine: Parsing certificate...
	I0401 20:55:51.229555  368392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:55:51.229574  368392 main.go:141] libmachine: Decoding PEM data...
	I0401 20:55:51.229584  368392 main.go:141] libmachine: Parsing certificate...
	I0401 20:55:51.229932  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:55:51.246026  368392 cli_runner.go:211] docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:55:51.246082  368392 network_create.go:284] running [docker network inspect newest-cni-235733] to gather additional debugging logs...
	I0401 20:55:51.246099  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733
	W0401 20:55:51.262257  368392 cli_runner.go:211] docker network inspect newest-cni-235733 returned with exit code 1
	I0401 20:55:51.262288  368392 network_create.go:287] error running [docker network inspect newest-cni-235733]: docker network inspect newest-cni-235733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-235733 not found
	I0401 20:55:51.262319  368392 network_create.go:289] output of [docker network inspect newest-cni-235733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-235733 not found
	
	** /stderr **
	I0401 20:55:51.262459  368392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:55:51.279358  368392 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:55:51.280014  368392 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:55:51.280866  368392 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:55:51.281444  368392 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:55:51.282361  368392 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f21ee0}
	I0401 20:55:51.282390  368392 network_create.go:124] attempt to create docker network newest-cni-235733 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0401 20:55:51.282432  368392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-235733 newest-cni-235733
	I0401 20:55:51.335951  368392 network_create.go:108] docker network newest-cni-235733 192.168.85.0/24 created
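
The subnet scan above steps through candidate /24 ranges (49, 58, 67, 76, then 85) and settles on the first one whose gateway address no local bridge already holds. A minimal shell sketch of that selection, assuming the step-of-9 probe order the log implies (the loop bound, the network name example-net, and the grep-based check are illustrative, not minikube's actual network.go logic):

    # Probe candidate private /24s in the order the log shows and stop at
    # the first gateway IP that no local interface has been assigned.
    for third in 49 58 67 76 85 94; do
      gw="192.168.${third}.1"
      if ip -4 addr show | grep -qF "inet ${gw}/"; then
        echo "skipping subnet 192.168.${third}.0/24 that is taken"
        continue
      fi
      echo "using free private subnet 192.168.${third}.0/24"
      # Mirror of the `docker network create` invocation logged above;
      # example-net stands in for the profile name.
      docker network create --driver=bridge --subnet="192.168.${third}.0/24" \
        --gateway="${gw}" -o com.docker.network.driver.mtu=1500 example-net
      break
    done
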
	I0401 20:55:51.335989  368392 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-235733" container
	I0401 20:55:51.336054  368392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:55:51.353323  368392 cli_runner.go:164] Run: docker volume create newest-cni-235733 --label name.minikube.sigs.k8s.io=newest-cni-235733 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:55:51.371940  368392 oci.go:103] Successfully created a docker volume newest-cni-235733
	I0401 20:55:51.372002  368392 cli_runner.go:164] Run: docker run --rm --name newest-cni-235733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-235733 --entrypoint /usr/bin/test -v newest-cni-235733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:55:51.827512  368392 oci.go:107] Successfully prepared a docker volume newest-cni-235733
	I0401 20:55:51.827553  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:55:51.827576  368392 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:55:51.827640  368392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-235733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:55:56.433713  368392 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-235733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.606030833s)
	I0401 20:55:56.433774  368392 kic.go:203] duration metric: took 4.606167722s to extract preloaded images to volume ...
	W0401 20:55:56.433934  368392 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:55:56.434054  368392 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:55:56.487877  368392 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-235733 --name newest-cni-235733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-235733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-235733 --network newest-cni-235733 --ip 192.168.85.2 --volume newest-cni-235733:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:55:56.781961  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Running}}
	I0401 20:55:56.801975  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:56.823816  368392 cli_runner.go:164] Run: docker exec newest-cni-235733 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:55:56.871442  368392 oci.go:144] the created container "newest-cni-235733" has a running status.
	I0401 20:55:56.871479  368392 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa...
	I0401 20:55:56.943607  368392 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:55:56.965673  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:56.984405  368392 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:55:56.984436  368392 kic_runner.go:114] Args: [docker exec --privileged newest-cni-235733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:55:57.026627  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:57.045380  368392 machine.go:93] provisionDockerMachine start ...
	I0401 20:55:57.045499  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:55:57.065297  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:55:57.065540  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:55:57.065556  368392 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:55:57.066345  368392 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33982->127.0.0.1:33128: read: connection reset by peer
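
The handshake failure just above ("connection reset by peer") is routine: sshd inside the just-created container is not listening yet, and the provisioner retries until the dial to 127.0.0.1:33128 sticks (it does at 20:56:00 below). The port comes straight from Docker's published-port table, so the session can be reproduced by hand (the key path is shortened to ~/.minikube here; this run keeps it under the jenkins integration directory):

    # Recover the host port Docker mapped to the container's 22/tcp, then
    # SSH in the way machine provisioning does.
    port=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-235733)
    ssh -o StrictHostKeyChecking=no \
      -i ~/.minikube/machines/newest-cni-235733/id_rsa \
      -p "$port" docker@127.0.0.1 hostname
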
	I0401 20:56:00.201231  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-235733
	
	I0401 20:56:00.201265  368392 ubuntu.go:169] provisioning hostname "newest-cni-235733"
	I0401 20:56:00.201326  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.218955  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:00.219170  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:00.219186  368392 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-235733 && echo "newest-cni-235733" | sudo tee /etc/hostname
	I0401 20:56:00.364461  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-235733
	
	I0401 20:56:00.364536  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.382885  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:00.383163  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:00.383188  368392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-235733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-235733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-235733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:56:00.513782  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:56:00.513823  368392 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:56:00.513858  368392 ubuntu.go:177] setting up certificates
	I0401 20:56:00.513870  368392 provision.go:84] configureAuth start
	I0401 20:56:00.513928  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:00.531771  368392 provision.go:143] copyHostCerts
	I0401 20:56:00.531843  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:56:00.531857  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:56:00.531926  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:56:00.532047  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:56:00.532057  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:56:00.532101  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:56:00.532213  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:56:00.532225  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:56:00.532261  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:56:00.532352  368392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.newest-cni-235733 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-235733]
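
provision.go issues that server certificate in Go against the local minikube CA. As a rough stand-in (openssl in place of minikube's Go code; the 365-day lifetime is an arbitrary choice here), the same request with the logged org and SAN list would look like:

    # Hypothetical openssl equivalent of the generate-server-cert step;
    # ca.pem / ca-key.pem are the files named in the log's certs directory.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.newest-cni-235733" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-235733")
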
	I0401 20:56:00.841413  368392 provision.go:177] copyRemoteCerts
	I0401 20:56:00.841482  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:56:00.841519  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.859012  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:00.954422  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:56:00.976662  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:56:00.997804  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:56:01.019589  368392 provision.go:87] duration metric: took 505.701108ms to configureAuth
	I0401 20:56:01.019637  368392 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:56:01.019807  368392 config.go:182] Loaded profile config "newest-cni-235733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:56:01.019899  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.037734  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:01.037948  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:01.037965  368392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:56:01.259923  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:56:01.259950  368392 machine.go:96] duration metric: took 4.214543608s to provisionDockerMachine
	I0401 20:56:01.259962  368392 client.go:171] duration metric: took 10.030591312s to LocalClient.Create
	I0401 20:56:01.259985  368392 start.go:167] duration metric: took 10.030653406s to libmachine.API.Create "newest-cni-235733"
	I0401 20:56:01.259996  368392 start.go:293] postStartSetup for "newest-cni-235733" (driver="docker")
	I0401 20:56:01.260012  368392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:56:01.260081  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:56:01.260140  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.278014  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.374588  368392 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:56:01.377642  368392 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:56:01.377683  368392 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:56:01.377691  368392 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:56:01.377696  368392 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:56:01.377712  368392 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:56:01.377786  368392 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:56:01.377882  368392 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:56:01.377978  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:56:01.385515  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:56:01.407351  368392 start.go:296] duration metric: took 147.337207ms for postStartSetup
	I0401 20:56:01.407668  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:01.425537  368392 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json ...
	I0401 20:56:01.425801  368392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:56:01.425842  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.442888  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.534314  368392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:56:01.538232  368392 start.go:128] duration metric: took 10.310743416s to createHost
	I0401 20:56:01.538254  368392 start.go:83] releasing machines lock for "newest-cni-235733", held for 10.310879717s
	I0401 20:56:01.538315  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:01.555159  368392 ssh_runner.go:195] Run: cat /version.json
	I0401 20:56:01.555215  368392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:56:01.555230  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.555260  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.572582  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.573374  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.740900  368392 ssh_runner.go:195] Run: systemctl --version
	I0401 20:56:01.745485  368392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:56:01.884976  368392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:56:01.889616  368392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:56:01.909087  368392 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:56:01.909171  368392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:56:01.936240  368392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:56:01.936271  368392 start.go:495] detecting cgroup driver to use...
	I0401 20:56:01.936302  368392 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:56:01.936359  368392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:56:01.950872  368392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:56:01.960607  368392 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:56:01.960650  368392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:56:01.972897  368392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:56:01.985549  368392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:56:02.062345  368392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:56:02.145739  368392 docker.go:233] disabling docker service ...
	I0401 20:56:02.145830  368392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:56:02.165178  368392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:56:02.175898  368392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:56:02.255098  368392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:56:02.339857  368392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
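
Condensed, the runtime handoff above is the following unit sequence (same units and flags as logged; the run ignores failures from units that are absent), leaving cri-o as the only runtime on the node:

    # Stop, disable and mask the Docker-side CRI services so cri-o alone
    # owns the runtime socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
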
	I0401 20:56:02.350977  368392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:56:02.368999  368392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:56:02.369065  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.378217  368392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:56:02.378284  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.387556  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.396151  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.405298  368392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:56:02.413977  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.422877  368392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.437362  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.446187  368392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:56:02.453840  368392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:56:02.461242  368392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:56:02.543750  368392 ssh_runner.go:195] Run: sudo systemctl restart crio
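
The cri-o reconfiguration above, collected into one script (commands lifted from the log, abridged to the config edits; order matters, since the conmon_cgroup and default_sysctls edits anchor on lines left by the earlier substitutions):

    conf=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image and cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    # Re-create conmon_cgroup right after the cgroup_manager line.
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    # Ensure a default_sysctls block exists, then allow unprivileged low ports.
    sudo grep -q '^ *default_sysctls' "$conf" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
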
	I0401 20:56:02.655731  368392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:56:02.655797  368392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:56:02.659253  368392 start.go:563] Will wait 60s for crictl version
	I0401 20:56:02.659301  368392 ssh_runner.go:195] Run: which crictl
	I0401 20:56:02.662408  368392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:56:02.695001  368392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:56:02.695070  368392 ssh_runner.go:195] Run: crio --version
	I0401 20:56:02.730815  368392 ssh_runner.go:195] Run: crio --version
	I0401 20:56:02.767535  368392 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:56:02.768829  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:56:02.785723  368392 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:56:02.789150  368392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:56:02.801035  368392 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0401 20:56:02.802073  368392 kubeadm.go:883] updating cluster {Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:56:02.802191  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:56:02.802244  368392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:56:02.868310  368392 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:56:02.868336  368392 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:56:02.868398  368392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:56:02.900261  368392 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:56:02.900287  368392 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:56:02.900297  368392 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 crio true true} ...
	I0401 20:56:02.900421  368392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-235733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:56:02.900499  368392 ssh_runner.go:195] Run: crio config
	I0401 20:56:02.943050  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:56:02.943079  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:56:02.943094  368392 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0401 20:56:02.943124  368392 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-235733 NodeName:newest-cni-235733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:56:02.943274  368392 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-235733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
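The four stanzas above (InitConfiguration and ClusterConfiguration on kubeadm.k8s.io/v1beta4, plus a KubeletConfiguration and a KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config assembled this way can be sanity-checked without modifying the node via kubeadm's dry-run mode (a sketch; assumes the kubeadm v1.32.x binary is on PATH):

	# nothing is written to the host; kubeadm just prints what it would do
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run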
	
	I0401 20:56:02.943346  368392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:56:02.951730  368392 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:56:02.951797  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:56:02.959547  368392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:56:02.975630  368392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:56:02.991591  368392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0401 20:56:03.007710  368392 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:56:03.010894  368392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:56:03.020411  368392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:56:03.093970  368392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:56:03.106151  368392 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733 for IP: 192.168.85.2
	I0401 20:56:03.106190  368392 certs.go:194] generating shared ca certs ...
	I0401 20:56:03.106211  368392 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.106378  368392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:56:03.106442  368392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:56:03.106462  368392 certs.go:256] generating profile certs ...
	I0401 20:56:03.106537  368392 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key
	I0401 20:56:03.106553  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt with IP's: []
	I0401 20:56:03.204839  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt ...
	I0401 20:56:03.204867  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt: {Name:mk2b69fd1306e9574a49b180a189491c14b919dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.205024  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key ...
	I0401 20:56:03.205034  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key: {Name:mk3c51db9ab5b1dcb6ddd2277dc33090ae7db9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.205110  368392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43
	I0401 20:56:03.205125  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0401 20:56:03.422662  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 ...
	I0401 20:56:03.422691  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43: {Name:mk378bd109209d6442c5c366c4811c1c8cc57546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.422864  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43 ...
	I0401 20:56:03.422876  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43: {Name:mk8e12228d94a3c114518f48dee47afd45daf3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.422955  368392 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt
	I0401 20:56:03.423062  368392 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key
	I0401 20:56:03.423122  368392 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key
	I0401 20:56:03.423137  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt with IP's: []
	I0401 20:56:03.572439  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt ...
	I0401 20:56:03.572465  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt: {Name:mk7e6c1669db6a08319c704f213e0c41e5626dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.572622  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key ...
	I0401 20:56:03.572637  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key: {Name:mk0f51bdc73ef3dd1180e988e0edb4d0bce3415f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.572813  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:56:03.572846  368392 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:56:03.572856  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:56:03.572876  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:56:03.572901  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:56:03.572922  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:56:03.572959  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:56:03.573487  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:56:03.595736  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:56:03.616757  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:56:03.637910  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:56:03.659062  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:56:03.679531  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:56:03.700054  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:56:03.720914  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:56:03.741541  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:56:03.764200  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:56:03.787280  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:56:03.809685  368392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:56:03.825804  368392 ssh_runner.go:195] Run: openssl version
	I0401 20:56:03.830738  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:56:03.839046  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.842192  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.842253  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.848450  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:56:03.856851  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:56:03.865289  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.868736  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.868789  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.874847  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:56:03.883272  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:56:03.891739  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.895096  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.895152  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.901418  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
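The openssl/ln pairs above implement OpenSSL's hashed-directory trust store: a certificate under /usr/share/ca-certificates becomes trusted system-wide once a symlink named <subject-hash>.0 in /etc/ssl/certs points at it. The same convention for an arbitrary PEM (path is a placeholder):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen above
	sudo ln -fs "$CERT" /etc/ssl/certs/"$HASH".0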
	I0401 20:56:03.910088  368392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:56:03.913097  368392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:56:03.913154  368392 kubeadm.go:392] StartCluster: {Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:56:03.913223  368392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:56:03.913271  368392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:56:03.946340  368392 cri.go:89] found id: ""
	I0401 20:56:03.946415  368392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:56:03.954894  368392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:56:03.962594  368392 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:56:03.962639  368392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:56:03.970078  368392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:56:03.970096  368392 kubeadm.go:157] found existing configuration files:
	
	I0401 20:56:03.970130  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:56:03.977625  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:56:03.977667  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:56:03.985314  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:56:03.992775  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:56:03.992824  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:56:04.000385  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:56:04.007981  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:56:04.008029  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:56:04.015291  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:56:04.022995  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:56:04.023054  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:56:04.030402  368392 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0401 20:56:04.067512  368392 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:56:04.067576  368392 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:56:04.083657  368392 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:56:04.083752  368392 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:56:04.083828  368392 kubeadm.go:310] OS: Linux
	I0401 20:56:04.083896  368392 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:56:04.083976  368392 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:56:04.084034  368392 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:56:04.084101  368392 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:56:04.084154  368392 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:56:04.084218  368392 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:56:04.084282  368392 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:56:04.084347  368392 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:56:04.084412  368392 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:56:04.137365  368392 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:56:04.137535  368392 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:56:04.137707  368392 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:56:04.144502  368392 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:56:04.148081  368392 out.go:235]   - Generating certificates and keys ...
	I0401 20:56:04.148163  368392 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:56:04.148256  368392 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:56:04.468367  368392 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:56:04.595348  368392 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:56:04.846696  368392 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:56:04.933291  368392 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:56:05.155272  368392 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:56:05.155421  368392 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-235733] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:56:05.379489  368392 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:56:05.379675  368392 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-235733] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:56:05.525019  368392 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:56:05.810103  368392 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:56:05.874624  368392 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:56:05.874709  368392 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:56:06.177958  368392 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:56:06.386480  368392 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:56:06.536853  368392 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:56:06.902813  368392 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:56:07.073460  368392 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:56:07.074022  368392 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:56:07.076399  368392 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:56:07.078643  368392 out.go:235]   - Booting up control plane ...
	I0401 20:56:07.078738  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:56:07.078814  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:56:07.078892  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:56:07.087142  368392 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:56:07.092214  368392 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:56:07.092285  368392 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:56:07.173012  368392 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:56:07.173192  368392 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:56:07.673791  368392 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.913575ms
	I0401 20:56:07.673891  368392 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0401 20:56:12.175384  368392 kubeadm.go:310] [api-check] The API server is healthy after 4.501585833s
	I0401 20:56:12.187633  368392 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0401 20:56:12.198151  368392 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0401 20:56:12.216078  368392 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0401 20:56:12.216326  368392 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-235733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0401 20:56:12.223093  368392 kubeadm.go:310] [bootstrap-token] Using token: 01y8gn.2ag4nsbud76zrx7l
	I0401 20:56:12.224592  368392 out.go:235]   - Configuring RBAC rules ...
	I0401 20:56:12.224756  368392 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0401 20:56:12.228218  368392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0401 20:56:12.233444  368392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0401 20:56:12.237072  368392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0401 20:56:12.239416  368392 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0401 20:56:12.241912  368392 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0401 20:56:12.582414  368392 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0401 20:56:13.028115  368392 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0401 20:56:13.581047  368392 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0401 20:56:13.581912  368392 kubeadm.go:310] 
	I0401 20:56:13.581992  368392 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0401 20:56:13.582001  368392 kubeadm.go:310] 
	I0401 20:56:13.582091  368392 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0401 20:56:13.582101  368392 kubeadm.go:310] 
	I0401 20:56:13.582130  368392 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0401 20:56:13.582200  368392 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0401 20:56:13.582272  368392 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0401 20:56:13.582280  368392 kubeadm.go:310] 
	I0401 20:56:13.582345  368392 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0401 20:56:13.582353  368392 kubeadm.go:310] 
	I0401 20:56:13.582416  368392 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0401 20:56:13.582424  368392 kubeadm.go:310] 
	I0401 20:56:13.582482  368392 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0401 20:56:13.582613  368392 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0401 20:56:13.582693  368392 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0401 20:56:13.582701  368392 kubeadm.go:310] 
	I0401 20:56:13.582800  368392 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0401 20:56:13.582892  368392 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0401 20:56:13.582900  368392 kubeadm.go:310] 
	I0401 20:56:13.583003  368392 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 01y8gn.2ag4nsbud76zrx7l \
	I0401 20:56:13.583160  368392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 \
	I0401 20:56:13.583203  368392 kubeadm.go:310] 	--control-plane 
	I0401 20:56:13.583208  368392 kubeadm.go:310] 
	I0401 20:56:13.583317  368392 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0401 20:56:13.583338  368392 kubeadm.go:310] 
	I0401 20:56:13.583460  368392 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 01y8gn.2ag4nsbud76zrx7l \
	I0401 20:56:13.583622  368392 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d93fb35a345f61a73d1a5c805e66c154297b8bb9225b71f12b591697818ec37 
	I0401 20:56:13.586716  368392 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0401 20:56:13.586962  368392 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0401 20:56:13.587058  368392 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
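The Service-Kubelet warning is harmless in this flow, since the log shows kubelet being started directly over SSH (sudo systemctl start kubelet, earlier); on a hand-managed node the fix kubeadm suggests would simply be:

	sudo systemctl enable kubelet.service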
	I0401 20:56:13.587089  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:56:13.587099  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:56:13.589574  368392 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0401 20:56:13.590602  368392 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0401 20:56:13.594237  368392 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0401 20:56:13.594263  368392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0401 20:56:13.689721  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0401 20:56:13.899147  368392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0401 20:56:13.899237  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:13.899268  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-235733 minikube.k8s.io/updated_at=2025_04_01T20_56_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a minikube.k8s.io/name=newest-cni-235733 minikube.k8s.io/primary=true
	I0401 20:56:14.030419  368392 ops.go:34] apiserver oom_adj: -16
	I0401 20:56:14.030437  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:14.530933  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:15.030938  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:15.530984  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:16.031503  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:16.530962  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:17.030958  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:17.530966  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:18.030948  368392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0401 20:56:18.094558  368392 kubeadm.go:1113] duration metric: took 4.195382513s to wait for elevateKubeSystemPrivileges
	I0401 20:56:18.094597  368392 kubeadm.go:394] duration metric: took 14.181445612s to StartCluster
	I0401 20:56:18.094618  368392 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:18.094705  368392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:56:18.095942  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:18.096198  368392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0401 20:56:18.096215  368392 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:56:18.096266  368392 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:56:18.096370  368392 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-235733"
	I0401 20:56:18.096383  368392 addons.go:69] Setting default-storageclass=true in profile "newest-cni-235733"
	I0401 20:56:18.096389  368392 config.go:182] Loaded profile config "newest-cni-235733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:56:18.096404  368392 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-235733"
	I0401 20:56:18.096418  368392 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-235733"
	I0401 20:56:18.096440  368392 host.go:66] Checking if "newest-cni-235733" exists ...
	I0401 20:56:18.096785  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:56:18.096961  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:56:18.097917  368392 out.go:177] * Verifying Kubernetes components...
	I0401 20:56:18.099260  368392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:56:18.117549  368392 addons.go:238] Setting addon default-storageclass=true in "newest-cni-235733"
	I0401 20:56:18.117590  368392 host.go:66] Checking if "newest-cni-235733" exists ...
	I0401 20:56:18.118065  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:56:18.118848  368392 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:56:18.120094  368392 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:56:18.120111  368392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:56:18.120148  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:18.138389  368392 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:56:18.138414  368392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:56:18.138472  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:18.141881  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:18.163881  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:18.219869  368392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
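The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts stanza immediately before the "forward . /etc/resolv.conf" plugin and a log directive before errors, then pushes the result back with kubectl replace. Reconstructed from the sed expressions, the resulting Corefile fragment should look roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...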
	I0401 20:56:18.255483  368392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:56:18.338013  368392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:56:18.339036  368392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:56:18.529653  368392 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0401 20:56:18.530589  368392 api_server.go:52] waiting for apiserver process to appear ...
	I0401 20:56:18.530641  368392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:56:18.776859  368392 api_server.go:72] duration metric: took 680.617286ms to wait for apiserver process to appear ...
	I0401 20:56:18.776885  368392 api_server.go:88] waiting for apiserver healthz status ...
	I0401 20:56:18.776905  368392 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0401 20:56:18.781881  368392 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0401 20:56:18.782833  368392 api_server.go:141] control plane version: v1.32.2
	I0401 20:56:18.782862  368392 api_server.go:131] duration metric: took 5.969028ms to wait for apiserver health ...
	I0401 20:56:18.782876  368392 system_pods.go:43] waiting for kube-system pods to appear ...
	I0401 20:56:18.783523  368392 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0401 20:56:18.784567  368392 addons.go:514] duration metric: took 688.300996ms for enable addons: enabled=[storage-provisioner default-storageclass]
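The healthz wait above is a plain HTTPS GET against the apiserver endpoint; the same probe can be reproduced by hand (a sketch; -k skips CA verification for brevity):

	curl -sk https://192.168.85.2:8443/healthz   # prints "ok" on a healthy apiserver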
	I0401 20:56:18.786084  368392 system_pods.go:59] 8 kube-system pods found
	I0401 20:56:18.786116  368392 system_pods.go:61] "coredns-668d6bf9bc-jx42m" [db026352-fb22-46f0-aa2b-2989954c6909] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0401 20:56:18.786126  368392 system_pods.go:61] "etcd-newest-cni-235733" [e9d92ee1-9e2d-471a-afb5-5580155e15ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0401 20:56:18.786135  368392 system_pods.go:61] "kindnet-gzzzn" [2dd24e50-4f14-4d3b-8956-4fcdaa06d528] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0401 20:56:18.786141  368392 system_pods.go:61] "kube-apiserver-newest-cni-235733" [0339955f-31fa-4892-a61b-30d0c13849bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0401 20:56:18.786158  368392 system_pods.go:61] "kube-controller-manager-newest-cni-235733" [f1102a71-b178-4b1e-bdaf-b7b471b4e4ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0401 20:56:18.786173  368392 system_pods.go:61] "kube-proxy-fkw5k" [60e9dceb-e905-407a-ad18-7f78414d38cb] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0401 20:56:18.786178  368392 system_pods.go:61] "kube-scheduler-newest-cni-235733" [66000131-0352-4a76-8392-4824652b5641] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0401 20:56:18.786188  368392 system_pods.go:61] "storage-provisioner" [5d8510d9-89e1-4233-aad2-0cba1f33969e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0401 20:56:18.786195  368392 system_pods.go:74] duration metric: took 3.309174ms to wait for pod list to return data ...
	I0401 20:56:18.786205  368392 default_sa.go:34] waiting for default service account to be created ...
	I0401 20:56:18.788092  368392 default_sa.go:45] found service account: "default"
	I0401 20:56:18.788107  368392 default_sa.go:55] duration metric: took 1.898223ms for default service account to be created ...
	I0401 20:56:18.788117  368392 kubeadm.go:582] duration metric: took 691.88013ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 20:56:18.788129  368392 node_conditions.go:102] verifying NodePressure condition ...
	I0401 20:56:18.790260  368392 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0401 20:56:18.790292  368392 node_conditions.go:123] node cpu capacity is 8
	I0401 20:56:18.790313  368392 node_conditions.go:105] duration metric: took 2.178553ms to run NodePressure ...
	I0401 20:56:18.790326  368392 start.go:241] waiting for startup goroutines ...
	I0401 20:56:19.033899  368392 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-235733" context rescaled to 1 replicas
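The "rescaled to 1 replicas" line reflects minikube shrinking the coredns Deployment to a single replica for this single-node cluster; an equivalent manual step would be (a sketch, using the host kubectl and context):

	kubectl --context newest-cni-235733 -n kube-system scale deployment coredns --replicas=1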
	I0401 20:56:19.033951  368392 start.go:246] waiting for cluster config update ...
	I0401 20:56:19.033967  368392 start.go:255] writing updated cluster config ...
	I0401 20:56:19.034232  368392 ssh_runner.go:195] Run: rm -f paused
	I0401 20:56:19.090158  368392 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0401 20:56:19.093020  368392 out.go:177] * Done! kubectl is now configured to use "newest-cni-235733" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 01 20:52:57 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:52:57.654732087Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:53:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:53:43.644918100Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c9507281-70ac-454a-81af-f099a56bc632 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:53:43.645199739Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c9507281-70ac-454a-81af-f099a56bc632 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:55 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:53:55.644431643Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c9775a09-5f12-4849-8f1d-233e57280fac name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:55 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:53:55.644708586Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c9775a09-5f12-4849-8f1d-233e57280fac name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:08 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:08.644451821Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c205d4a0-9c16-4e38-8339-7412c101fc3b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:08 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:08.644714277Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c205d4a0-9c16-4e38-8339-7412c101fc3b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:20 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:20.644229388Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=916b6d68-ba5a-472d-b0e3-a380820aaf41 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:20 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:20.644469598Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=916b6d68-ba5a-472d-b0e3-a380820aaf41 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:35 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:35.645024982Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a49aba82-908b-4a73-81ab-90af042ddb0b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:35 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:35.645330381Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a49aba82-908b-4a73-81ab-90af042ddb0b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:49 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:49.644172706Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4866fdeb-c6f0-436f-8a89-498362bced5b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:49 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:54:49.644457417Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4866fdeb-c6f0-436f-8a89-498362bced5b name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:04 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:04.644919453Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=04a7b500-ccbf-48fb-b15f-29d89e6b6fb3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:04 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:04.645138474Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=04a7b500-ccbf-48fb-b15f-29d89e6b6fb3 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:16 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:16.644713436Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=009c1700-ca3c-4b98-ab25-a078b0f21a35 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:16 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:16.644908909Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=009c1700-ca3c-4b98-ab25-a078b0f21a35 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:29 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:29.644075155Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=31a5c015-5ec3-46e7-b462-ce560a3699ad name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:29 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:29.644359737Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=31a5c015-5ec3-46e7-b462-ce560a3699ad name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:43.644393147Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=19044930-acc0-42db-9dc0-d5ef110ede25 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:43 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:43.644629996Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=19044930-acc0-42db-9dc0-d5ef110ede25 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:56 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:56.644815430Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=2d1b0e1d-97cc-48f9-afdd-dd5ec780ff7e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:56 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:55:56.645051147Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=2d1b0e1d-97cc-48f9-afdd-dd5ec780ff7e name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:56:09 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:56:09.644426556Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=4fe59800-a375-4a56-9c55-d5eaaf2f6ca8 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:56:09 default-k8s-diff-port-993330 crio[551]: time="2025-04-01 20:56:09.644720938Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=4fe59800-a375-4a56-9c55-d5eaaf2f6ca8 name=/runtime.v1.ImageService/ImageStatus
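The alternating ImageStatus lines above show CRI-O on default-k8s-diff-port-993330 polling for docker.io/kindest/kindnetd:v20250214-acbabc1a and never finding it locally, which is why no CNI comes up and the node stays NotReady (see the describe output below). Reproducing the check and attempting the pull by hand would look like (sketch):

	sudo crictl images | grep kindnetd             # what CRI-O actually has cached
	sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a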
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f01b95ee70b78       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   17 minutes ago      Running             kube-proxy                1                   c991b896744f3       kube-proxy-btnmc
	65a195d0c0eee       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   17 minutes ago      Running             kube-scheduler            1                   c122dcfc3b396       kube-scheduler-default-k8s-diff-port-993330
	3fc5e3c8360ed       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   17 minutes ago      Running             kube-apiserver            1                   ed07a91d341b7       kube-apiserver-default-k8s-diff-port-993330
	359dfdc6cc6fc       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   17 minutes ago      Running             kube-controller-manager   1                   5aa5cbe680b17       kube-controller-manager-default-k8s-diff-port-993330
	97f8ee6669267       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 minutes ago      Running             etcd                      1                   81f7f6b1c2968       etcd-default-k8s-diff-port-993330
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=default-k8s-diff-port-993330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_40_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:36 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993330
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:56:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:54:44 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:54:44 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:54:44 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:54:44 +0000   Tue, 01 Apr 2025 20:26:35 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-993330
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 a059a387258444c8a5d2ccbb6a4f4f0c
	  System UUID:                456ef2c1-e31c-4f0b-afee-ce614815c518
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-diff-port-993330                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-9xbmt                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-993330             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993330    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-btnmc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-993330             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29m                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 29m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   Starting                 29m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-993330 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node default-k8s-diff-port-993330 event: Registered Node default-k8s-diff-port-993330 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [97f8ee6669267ad80232ce8bf71fc941954cb5cbcd412ad8213873a5a511b38b] <==
	{"level":"info","ts":"2025-04-01T20:39:02.920963Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:02.920994Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-01T20:39:04.749608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-04-01T20:39:04.749862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.749947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-04-01T20:39:04.750727Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:default-k8s-diff-port-993330 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-01T20:39:04.750738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.750768Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-01T20:39:04.751743Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.752148Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752189Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:04.752606Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-04-01T20:39:04.752611Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:04.753116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:38.126747Z","caller":"traceutil/trace.go:171","msg":"trace[1345586840] transaction","detail":"{read_only:false; response_revision:853; number_of_response:1; }","duration":"118.996467ms","start":"2025-04-01T20:39:38.007727Z","end":"2025-04-01T20:39:38.126724Z","steps":["trace[1345586840] 'process raft request'  (duration: 56.085101ms)","trace[1345586840] 'compare'  (duration: 62.811604ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:49:04.766909Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":971}
	{"level":"info","ts":"2025-04-01T20:49:04.771402Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":971,"took":"4.247389ms","hash":3246169667,"current-db-size-bytes":1921024,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1921024,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-04-01T20:49:04.771435Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3246169667,"revision":971,"compact-revision":537}
	{"level":"info","ts":"2025-04-01T20:54:04.771753Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1091}
	{"level":"info","ts":"2025-04-01T20:54:04.774215Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1091,"took":"2.219516ms","hash":1466400135,"current-db-size-bytes":1921024,"current-db-size":"1.9 MB","current-db-size-in-use-bytes":1093632,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-04-01T20:54:04.774248Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1466400135,"revision":1091,"compact-revision":971}
	
	
	==> kernel <==
	 20:56:20 up  1:38,  0 users,  load average: 1.41, 0.60, 0.91
	Linux default-k8s-diff-port-993330 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [3fc5e3c8360edb7984be32faf8eef372adf72360ea8d96ce692122c037453681] <==
	I0401 20:52:07.156282       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:52:07.156302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:54:06.154237       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:54:06.154355       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0401 20:54:07.156452       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:54:07.156479       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:54:07.156518       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:54:07.156550       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:54:07.157628       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:54:07.157645       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:55:07.158096       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:55:07.158111       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:55:07.158148       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:55:07.158195       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:55:07.159261       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:55:07.159282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [359dfdc6cc6fc25f3136a3577c905adb20d4762ca289cc023c7aa3e8c0221998] <==
	E0401 20:50:39.438539       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:39.484803       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:09.444293       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:09.491313       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:39.449403       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:39.498659       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:09.455471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:09.506398       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:39.460600       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:39.513398       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:09.465878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:09.519874       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:39.470638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:39.526502       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:09.476256       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:09.532919       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:39.481578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:39.539131       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:54:44.026314       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-993330"
	E0401 20:55:09.487182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:09.546737       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:55:39.492686       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:39.553189       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:56:09.497693       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:56:09.559702       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [f01b95ee70b78d448bb8f831dc34b6c7ae96d0ccbdce6b18c2c076cbba24760e] <==
	I0401 20:39:07.540137       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:07.958690       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.103.2"]
	E0401 20:39:07.959920       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:08.054675       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:08.055270       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:08.058888       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:08.059395       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:08.059435       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:08.060790       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:08.060804       1 config.go:199] "Starting service config controller"
	I0401 20:39:08.060830       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:08.060832       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:08.061405       1 config.go:329] "Starting node config controller"
	I0401 20:39:08.061423       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:08.160990       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:08.160982       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:08.161646       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [65a195d0c0eee552be400b60ac82ad3be750b1213af7968bc93e67d39c09622b] <==
	I0401 20:39:03.764615       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:06.018090       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:06.042164       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:06.042298       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:06.042343       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:06.146155       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:06.146255       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.153712       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:06.156339       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:06.161882       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:06.158746       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:06.263913       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:55:26 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:26.871651     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:29 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:29.644611     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:55:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:31.754383     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540931754166310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:31.754424     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540931754166310,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:31 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:31.872894     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:36 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:36.874475     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:41.755489     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540941755291402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:41.755536     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540941755291402,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:41 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:41.875956     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:43 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:43.644853     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:55:46 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:46.877257     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:51.756501     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540951756341054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:51.756547     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540951756341054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:51 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:51.878178     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:56 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:56.645349     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:55:56 default-k8s-diff-port-993330 kubelet[668]: E0401 20:55:56.879662     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:01.757597     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540961757404873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:01.757645     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540961757404873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:01 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:01.881002     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:06 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:06.882060     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:09 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:09.644983     668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-9xbmt" podUID="68b2c7ae-356c-49af-994e-ada27ca91c66"
	Apr 01 20:56:11 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:11.758792     668 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540971758599717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:11 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:11.758834     668 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540971758599717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:11 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:11.883662     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:16 default-k8s-diff-port-993330 kubelet[668]: E0401 20:56:16.885103     668 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

                                                
                                                
-- /stdout --
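Note: the log tail above shows this profile's failure chain in full. CRI-O repeatedly reports "Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found", the kubelet's pull attempts fail with Docker Hub's "toomanyrequests" unauthenticated rate limit, so the kindnet CNI never starts, no config appears in /etc/cni/net.d/, the node keeps the node.kubernetes.io/not-ready taint (Ready=False in the node conditions), and every pod that needs pod networking stays Pending. On a rate-limited runner, one possible mitigation (a sketch, not something this run did) is to side-load the image so the kubelet never pulls from Docker Hub:

	# pull once with authenticated credentials (or from a mirror), then load it
	# into the minikube node's container storage
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube -p default-k8s-diff-port-993330 image load docker.io/kindest/kindnetd:v20250214-acbabc1a

Authenticating the runner (docker login) or pointing CRI-O at a pull-through registry mirror would avoid the limit as well.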
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1 (78.11618ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wrpd (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7wrpd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  20m (x2 over 25m)    default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m14s (x4 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-5qtb7" not found
	Error from server (NotFound): pods "kindnet-9xbmt" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-998nd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-dskhc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-rwzdk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-993330 describe pod busybox coredns-668d6bf9bc-5qtb7 kindnet-9xbmt metrics-server-f79f97bbb-998nd storage-provisioner dashboard-metrics-scraper-86c6bf9756-dskhc kubernetes-dashboard-7779f9b69b-rwzdk: exit status 1
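Note: the NotFound errors above do not mean those pods are gone; this describe was run without a namespace flag, so names are resolved only in the default namespace (hence busybox is the one pod it finds). coredns, kindnet, metrics-server, and storage-provisioner live in kube-system, and the dashboard pods in kubernetes-dashboard, as the earlier all-namespaces query showed. A namespace-qualified query (illustrative, not what the harness ran) would be:

	kubectl --context default-k8s-diff-port-993330 -n kube-system describe pod kindnet-9xbmt
	kubectl --context default-k8s-diff-port-993330 -n kubernetes-dashboard describe pod kubernetes-dashboard-7779f9b69b-rwzdk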
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (254.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (241.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-q2fjx" [6ed5edcd-f3a9-4177-bc48-6176cfd8c20d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
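Note: the dashboard pod is Unschedulable for the same underlying reason as the other profiles in this run: the single node still carries the node.kubernetes.io/not-ready taint because the kindnet CNI image could not be pulled. The taint can be confirmed directly (illustrative commands, assuming the node is named after the profile as minikube does by default):

	kubectl --context embed-certs-974821 get nodes
	kubectl --context embed-certs-974821 describe node embed-certs-974821 | grep -A2 Taints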
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:56:10.949844603 +0000 UTC m=+4256.550776046
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-974821 describe po kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe po kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard: context deadline exceeded (1.589µs)
start_stop_delete_test.go:285: kubectl --context embed-certs-974821 describe po kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-974821 logs kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-974821 logs kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard: context deadline exceeded (242ns)
start_stop_delete_test.go:285: kubectl --context embed-certs-974821 logs kubernetes-dashboard-7779f9b69b-q2fjx -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-974821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.058µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-974821 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
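Note: the kubectl invocations above fail with "context deadline exceeded" in microseconds (1.589µs, 242ns, 1.058µs) because the test reuses a single Go context whose 9m deadline was consumed by the pod wait; the commands are cancelled before any request reaches the API server, which is why no pod description, logs, or deployment info could be captured here. Run by hand outside the expired context, the equivalent query (a sketch) would still execute:

	kubectl --context embed-certs-974821 -n kubernetes-dashboard describe deploy dashboard-metrics-scraper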
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-974821
helpers_test.go:235: (dbg) docker inspect embed-certs-974821:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	        "Created": "2025-04-01T20:26:16.868604555Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352010,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.286446875Z",
	            "FinishedAt": "2025-04-01T20:38:52.118073098Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/hosts",
	        "LogPath": "/var/lib/docker/containers/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b/b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b-json.log",
	        "Name": "/embed-certs-974821",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-974821:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-974821",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b26f988fd8897d3f37d28b28159549757eeac32be74ff882a076acca4d542c5b",
	                "LowerDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5316044df4a4cd531f89a880ff609c3e4c6db05948a94223074a72f0f590a972/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-974821",
	                "Source": "/var/lib/docker/volumes/embed-certs-974821/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-974821",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-974821",
	                "name.minikube.sigs.k8s.io": "embed-certs-974821",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a400a933eabcb680d1a6c739c40c6e1e691bc1d846119585a6bea14a4faf054",
	            "SandboxKey": "/var/run/docker/netns/3a400a933eab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-974821": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:df:19:aa:43:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7bc427b9d0a76a9b65d9c7350c64fa7b62c15a0e5ba62c34a9ee658b9c3973dc",
	                    "EndpointID": "fcd49a1d7a931c51670bb1639475ceebb2f5e6078df77f57455465bfc6426ab5",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-974821",
	                        "b26f988fd889"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-974821 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:55 UTC |
	| start   | -p newest-cni-235733 --memory=2200 --alsologtostderr   | newest-cni-235733            | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=crio              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:55 UTC | 01 Apr 25 20:55 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:55:51
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:55:51.058989  368392 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:55:51.059115  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059125  368392 out.go:358] Setting ErrFile to fd 2...
	I0401 20:55:51.059129  368392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:55:51.059321  368392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:55:51.059942  368392 out.go:352] Setting JSON to false
	I0401 20:55:51.061160  368392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":5897,"bootTime":1743535054,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:55:51.061260  368392 start.go:139] virtualization: kvm guest
	I0401 20:55:51.063851  368392 out.go:177] * [newest-cni-235733] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:55:51.065265  368392 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:55:51.065294  368392 notify.go:220] Checking for updates...
	I0401 20:55:51.067422  368392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:55:51.068384  368392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:55:51.069267  368392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:55:51.070173  368392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:55:51.071206  368392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:55:51.072571  368392 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072680  368392 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072792  368392 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:55:51.072915  368392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:55:51.095817  368392 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:55:51.095892  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.143133  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.134138641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.143250  368392 docker.go:318] overlay module found
	I0401 20:55:51.144967  368392 out.go:177] * Using the docker driver based on user configuration
	I0401 20:55:51.146008  368392 start.go:297] selected driver: docker
	I0401 20:55:51.146024  368392 start.go:901] validating driver "docker" against <nil>
	I0401 20:55:51.146036  368392 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:55:51.146923  368392 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:55:51.198901  368392 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 20:55:51.190131502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:55:51.199094  368392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0401 20:55:51.199131  368392 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0401 20:55:51.199479  368392 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0401 20:55:51.201405  368392 out.go:177] * Using Docker driver with root privileges
	I0401 20:55:51.202338  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:55:51.202417  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:55:51.202433  368392 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 20:55:51.202524  368392 start.go:340] cluster config:
	{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:55:51.203656  368392 out.go:177] * Starting "newest-cni-235733" primary control-plane node in "newest-cni-235733" cluster
	I0401 20:55:51.204742  368392 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:55:51.205935  368392 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:55:51.207028  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:55:51.207058  368392 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:55:51.207064  368392 cache.go:56] Caching tarball of preloaded images
	I0401 20:55:51.207135  368392 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:55:51.207163  368392 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:55:51.207171  368392 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:55:51.207259  368392 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json ...
	I0401 20:55:51.207276  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json: {Name:mk10fae3f4d17094cdcb12dcfa676dc28e751b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:55:51.227186  368392 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:55:51.227203  368392 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:55:51.227224  368392 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:55:51.227260  368392 start.go:360] acquireMachinesLock for newest-cni-235733: {Name:mk2bd08d0a606a11f78441bb216ae502c7382305 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:55:51.227360  368392 start.go:364] duration metric: took 83.063µs to acquireMachinesLock for "newest-cni-235733"
	I0401 20:55:51.227399  368392 start.go:93] Provisioning new machine with config: &{Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:55:51.227477  368392 start.go:125] createHost starting for "" (driver="docker")
	I0401 20:55:51.229133  368392 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0401 20:55:51.229333  368392 start.go:159] libmachine.API.Create for "newest-cni-235733" (driver="docker")
	I0401 20:55:51.229364  368392 client.go:168] LocalClient.Create starting
	I0401 20:55:51.229457  368392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem
	I0401 20:55:51.229488  368392 main.go:141] libmachine: Decoding PEM data...
	I0401 20:55:51.229503  368392 main.go:141] libmachine: Parsing certificate...
	I0401 20:55:51.229555  368392 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem
	I0401 20:55:51.229574  368392 main.go:141] libmachine: Decoding PEM data...
	I0401 20:55:51.229584  368392 main.go:141] libmachine: Parsing certificate...
	I0401 20:55:51.229932  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0401 20:55:51.246026  368392 cli_runner.go:211] docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0401 20:55:51.246082  368392 network_create.go:284] running [docker network inspect newest-cni-235733] to gather additional debugging logs...
	I0401 20:55:51.246099  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733
	W0401 20:55:51.262257  368392 cli_runner.go:211] docker network inspect newest-cni-235733 returned with exit code 1
	I0401 20:55:51.262288  368392 network_create.go:287] error running [docker network inspect newest-cni-235733]: docker network inspect newest-cni-235733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-235733 not found
	I0401 20:55:51.262319  368392 network_create.go:289] output of [docker network inspect newest-cni-235733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-235733 not found
	
	** /stderr **
	I0401 20:55:51.262459  368392 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:55:51.279358  368392 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
	I0401 20:55:51.280014  368392 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-81fe12fae94d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:cc:45:d3:a7:72} reservation:<nil>}
	I0401 20:55:51.280866  368392 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d1f8fe59a39e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:0e:15:5e:6b:fd:d1} reservation:<nil>}
	I0401 20:55:51.281444  368392 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b666aa65b1b8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:ed:87:e7:d7:c9} reservation:<nil>}
	I0401 20:55:51.282361  368392 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f21ee0}
	I0401 20:55:51.282390  368392 network_create.go:124] attempt to create docker network newest-cni-235733 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0401 20:55:51.282432  368392 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-235733 newest-cni-235733
	I0401 20:55:51.335951  368392 network_create.go:108] docker network newest-cni-235733 192.168.85.0/24 created
	I0401 20:55:51.335989  368392 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-235733" container
	I0401 20:55:51.336054  368392 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0401 20:55:51.353323  368392 cli_runner.go:164] Run: docker volume create newest-cni-235733 --label name.minikube.sigs.k8s.io=newest-cni-235733 --label created_by.minikube.sigs.k8s.io=true
	I0401 20:55:51.371940  368392 oci.go:103] Successfully created a docker volume newest-cni-235733
	I0401 20:55:51.372002  368392 cli_runner.go:164] Run: docker run --rm --name newest-cni-235733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-235733 --entrypoint /usr/bin/test -v newest-cni-235733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0401 20:55:51.827512  368392 oci.go:107] Successfully prepared a docker volume newest-cni-235733
	I0401 20:55:51.827553  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:55:51.827576  368392 kic.go:194] Starting extracting preloaded images to volume ...
	I0401 20:55:51.827640  368392 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-235733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0401 20:55:56.433713  368392 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-235733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.606030833s)
	I0401 20:55:56.433774  368392 kic.go:203] duration metric: took 4.606167722s to extract preloaded images to volume ...
	W0401 20:55:56.433934  368392 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0401 20:55:56.434054  368392 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0401 20:55:56.487877  368392 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-235733 --name newest-cni-235733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-235733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-235733 --network newest-cni-235733 --ip 192.168.85.2 --volume newest-cni-235733:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0401 20:55:56.781961  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Running}}
	I0401 20:55:56.801975  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:56.823816  368392 cli_runner.go:164] Run: docker exec newest-cni-235733 stat /var/lib/dpkg/alternatives/iptables
	I0401 20:55:56.871442  368392 oci.go:144] the created container "newest-cni-235733" has a running status.
	I0401 20:55:56.871479  368392 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa...
	I0401 20:55:56.943607  368392 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0401 20:55:56.965673  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:56.984405  368392 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0401 20:55:56.984436  368392 kic_runner.go:114] Args: [docker exec --privileged newest-cni-235733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0401 20:55:57.026627  368392 cli_runner.go:164] Run: docker container inspect newest-cni-235733 --format={{.State.Status}}
	I0401 20:55:57.045380  368392 machine.go:93] provisionDockerMachine start ...
	I0401 20:55:57.045499  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:55:57.065297  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:55:57.065540  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:55:57.065556  368392 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:55:57.066345  368392 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33982->127.0.0.1:33128: read: connection reset by peer
	I0401 20:56:00.201231  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-235733
	
	I0401 20:56:00.201265  368392 ubuntu.go:169] provisioning hostname "newest-cni-235733"
	I0401 20:56:00.201326  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.218955  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:00.219170  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:00.219186  368392 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-235733 && echo "newest-cni-235733" | sudo tee /etc/hostname
	I0401 20:56:00.364461  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-235733
	
	I0401 20:56:00.364536  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.382885  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:00.383163  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:00.383188  368392 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-235733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-235733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-235733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:56:00.513782  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:56:00.513823  368392 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:56:00.513858  368392 ubuntu.go:177] setting up certificates
	I0401 20:56:00.513870  368392 provision.go:84] configureAuth start
	I0401 20:56:00.513928  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:00.531771  368392 provision.go:143] copyHostCerts
	I0401 20:56:00.531843  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:56:00.531857  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:56:00.531926  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:56:00.532047  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:56:00.532057  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:56:00.532101  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:56:00.532213  368392 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:56:00.532225  368392 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:56:00.532261  368392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:56:00.532352  368392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.newest-cni-235733 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-235733]
	I0401 20:56:00.841413  368392 provision.go:177] copyRemoteCerts
	I0401 20:56:00.841482  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:56:00.841519  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:00.859012  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:00.954422  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:56:00.976662  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0401 20:56:00.997804  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:56:01.019589  368392 provision.go:87] duration metric: took 505.701108ms to configureAuth
	I0401 20:56:01.019637  368392 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:56:01.019807  368392 config.go:182] Loaded profile config "newest-cni-235733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:56:01.019899  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.037734  368392 main.go:141] libmachine: Using SSH client type: native
	I0401 20:56:01.037948  368392 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I0401 20:56:01.037965  368392 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:56:01.259923  368392 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:56:01.259950  368392 machine.go:96] duration metric: took 4.214543608s to provisionDockerMachine
	I0401 20:56:01.259962  368392 client.go:171] duration metric: took 10.030591312s to LocalClient.Create
	I0401 20:56:01.259985  368392 start.go:167] duration metric: took 10.030653406s to libmachine.API.Create "newest-cni-235733"
	I0401 20:56:01.259996  368392 start.go:293] postStartSetup for "newest-cni-235733" (driver="docker")
	I0401 20:56:01.260012  368392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:56:01.260081  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:56:01.260140  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.278014  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.374588  368392 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:56:01.377642  368392 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:56:01.377683  368392 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:56:01.377691  368392 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:56:01.377696  368392 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:56:01.377712  368392 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:56:01.377786  368392 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:56:01.377882  368392 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:56:01.377978  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:56:01.385515  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:56:01.407351  368392 start.go:296] duration metric: took 147.337207ms for postStartSetup
	I0401 20:56:01.407668  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:01.425537  368392 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/config.json ...
	I0401 20:56:01.425801  368392 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:56:01.425842  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.442888  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.534314  368392 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:56:01.538232  368392 start.go:128] duration metric: took 10.310743416s to createHost
	I0401 20:56:01.538254  368392 start.go:83] releasing machines lock for "newest-cni-235733", held for 10.310879717s
	I0401 20:56:01.538315  368392 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-235733
	I0401 20:56:01.555159  368392 ssh_runner.go:195] Run: cat /version.json
	I0401 20:56:01.555215  368392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:56:01.555230  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.555260  368392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-235733
	I0401 20:56:01.572582  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.573374  368392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/newest-cni-235733/id_rsa Username:docker}
	I0401 20:56:01.740900  368392 ssh_runner.go:195] Run: systemctl --version
	I0401 20:56:01.745485  368392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:56:01.884976  368392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:56:01.889616  368392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:56:01.909087  368392 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:56:01.909171  368392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:56:01.936240  368392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0401 20:56:01.936271  368392 start.go:495] detecting cgroup driver to use...
	I0401 20:56:01.936302  368392 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:56:01.936359  368392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:56:01.950872  368392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:56:01.960607  368392 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:56:01.960650  368392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:56:01.972897  368392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:56:01.985549  368392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:56:02.062345  368392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:56:02.145739  368392 docker.go:233] disabling docker service ...
	I0401 20:56:02.145830  368392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:56:02.165178  368392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:56:02.175898  368392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:56:02.255098  368392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:56:02.339857  368392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:56:02.350977  368392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:56:02.368999  368392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:56:02.369065  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.378217  368392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:56:02.378284  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.387556  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.396151  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.405298  368392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:56:02.413977  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.422877  368392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:56:02.437362  368392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
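	[editor's note] Taken together, the sed edits above converge the CRI-O drop-in on a known state. A sketch of what /etc/crio/crio.conf.d/02-crio.conf plausibly contains afterwards, reconstructed from the commands rather than captured from the node (the keys sit under the usual [crio.image]/[crio.runtime] tables in the full file):
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]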
	I0401 20:56:02.446187  368392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:56:02.453840  368392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:56:02.461242  368392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:56:02.543750  368392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:56:02.655731  368392 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:56:02.655797  368392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:56:02.659253  368392 start.go:563] Will wait 60s for crictl version
	I0401 20:56:02.659301  368392 ssh_runner.go:195] Run: which crictl
	I0401 20:56:02.662408  368392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:56:02.695001  368392 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:56:02.695070  368392 ssh_runner.go:195] Run: crio --version
	I0401 20:56:02.730815  368392 ssh_runner.go:195] Run: crio --version
	I0401 20:56:02.767535  368392 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:56:02.768829  368392 cli_runner.go:164] Run: docker network inspect newest-cni-235733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:56:02.785723  368392 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:56:02.789150  368392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
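	[editor's note] The one-liner above is minikube's standard hosts-file idiom: grep -v drops any stale host.minikube.internal entry, echo appends the fresh gateway mapping, and the result is written to a temp file and copied (not moved) over /etc/hosts, because inside a container /etc/hosts is bind-mounted and must be rewritten in place. Generalized sketch, with GATEWAY_IP standing in for the network gateway:
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    printf '%s\thost.minikube.internal\n' "$GATEWAY_IP"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the bind-mounted inode; mv would fail or detach it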
	I0401 20:56:02.801035  368392 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0401 20:56:02.802073  368392 kubeadm.go:883] updating cluster {Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:56:02.802191  368392 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:56:02.802244  368392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:56:02.868310  368392 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:56:02.868336  368392 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:56:02.868398  368392 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:56:02.900261  368392 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:56:02.900287  368392 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:56:02.900297  368392 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 crio true true} ...
	I0401 20:56:02.900421  368392 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-235733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
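	[editor's note] The empty ExecStart= line in the kubelet drop-in above is the usual systemd override pattern: it clears the packaged unit's command list before substituting minikube's own kubelet invocation. One way to inspect the merged result on the node (hypothetical check):
	  sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in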
	I0401 20:56:02.900499  368392 ssh_runner.go:195] Run: crio config
	I0401 20:56:02.943050  368392 cni.go:84] Creating CNI manager for ""
	I0401 20:56:02.943079  368392 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:56:02.943094  368392 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0401 20:56:02.943124  368392 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-235733 NodeName:newest-cni-235733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:56:02.943274  368392 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-235733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
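	[editor's note] A config like the one just rendered can be sanity-checked offline before the real kubeadm init later in the log; a hypothetical verification step the harness does not perform:
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # renders manifests without touching the node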
	
	I0401 20:56:02.943346  368392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:56:02.951730  368392 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:56:02.951797  368392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:56:02.959547  368392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:56:02.975630  368392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:56:02.991591  368392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I0401 20:56:03.007710  368392 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:56:03.010894  368392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:56:03.020411  368392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:56:03.093970  368392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:56:03.106151  368392 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733 for IP: 192.168.85.2
	I0401 20:56:03.106190  368392 certs.go:194] generating shared ca certs ...
	I0401 20:56:03.106211  368392 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.106378  368392 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:56:03.106442  368392 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:56:03.106462  368392 certs.go:256] generating profile certs ...
	I0401 20:56:03.106537  368392 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key
	I0401 20:56:03.106553  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt with IP's: []
	I0401 20:56:03.204839  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt ...
	I0401 20:56:03.204867  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.crt: {Name:mk2b69fd1306e9574a49b180a189491c14b919dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.205024  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key ...
	I0401 20:56:03.205034  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/client.key: {Name:mk3c51db9ab5b1dcb6ddd2277dc33090ae7db9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.205110  368392 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43
	I0401 20:56:03.205125  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0401 20:56:03.422662  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 ...
	I0401 20:56:03.422691  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43: {Name:mk378bd109209d6442c5c366c4811c1c8cc57546 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.422864  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43 ...
	I0401 20:56:03.422876  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43: {Name:mk8e12228d94a3c114518f48dee47afd45daf3df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.422955  368392 certs.go:381] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt.64faaa43 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt
	I0401 20:56:03.423062  368392 certs.go:385] copying /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key.64faaa43 -> /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key
	I0401 20:56:03.423122  368392 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key
	I0401 20:56:03.423137  368392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt with IP's: []
	I0401 20:56:03.572439  368392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt ...
	I0401 20:56:03.572465  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt: {Name:mk7e6c1669db6a08319c704f213e0c41e5626dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.572622  368392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key ...
	I0401 20:56:03.572637  368392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key: {Name:mk0f51bdc73ef3dd1180e988e0edb4d0bce3415f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:56:03.572813  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:56:03.572846  368392 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:56:03.572856  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:56:03.572876  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:56:03.572901  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:56:03.572922  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:56:03.572959  368392 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:56:03.573487  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:56:03.595736  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:56:03.616757  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:56:03.637910  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:56:03.659062  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:56:03.679531  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:56:03.700054  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:56:03.720914  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/newest-cni-235733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:56:03.741541  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:56:03.764200  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:56:03.787280  368392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:56:03.809685  368392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:56:03.825804  368392 ssh_runner.go:195] Run: openssl version
	I0401 20:56:03.830738  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:56:03.839046  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.842192  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.842253  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:56:03.848450  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:56:03.856851  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:56:03.865289  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.868736  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.868789  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:56:03.874847  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:56:03.883272  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:56:03.891739  368392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.895096  368392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.895152  368392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:56:03.901418  368392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
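	[editor's note] The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values: each is the output of the openssl x509 -hash run that precedes the corresponding ln, with a .0 suffix for the first certificate at that hash. Equivalent sketch for any CA file:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # what c_rehash would produce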
	I0401 20:56:03.910088  368392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:56:03.913097  368392 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0401 20:56:03.913154  368392 kubeadm.go:392] StartCluster: {Name:newest-cni-235733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:newest-cni-235733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:56:03.913223  368392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:56:03.913271  368392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:56:03.946340  368392 cri.go:89] found id: ""
	I0401 20:56:03.946415  368392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:56:03.954894  368392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0401 20:56:03.962594  368392 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0401 20:56:03.962639  368392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0401 20:56:03.970078  368392 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0401 20:56:03.970096  368392 kubeadm.go:157] found existing configuration files:
	
	I0401 20:56:03.970130  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0401 20:56:03.977625  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0401 20:56:03.977667  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0401 20:56:03.985314  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0401 20:56:03.992775  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0401 20:56:03.992824  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0401 20:56:04.000385  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0401 20:56:04.007981  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0401 20:56:04.008029  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0401 20:56:04.015291  368392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0401 20:56:04.022995  368392 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0401 20:56:04.023054  368392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0401 20:56:04.030402  368392 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
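	[editor's note] The long --ignore-preflight-errors list mirrors the docker-driver note at 20:56:03.962594: the "node" is a container sharing the host kernel, so checks like SystemVerification, Swap, Mem and the port/file probes are expected to misfire and are skipped wholesale. The preflight phase can be replayed on its own (hypothetical):
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=all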
	I0401 20:56:04.067512  368392 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0401 20:56:04.067576  368392 kubeadm.go:310] [preflight] Running pre-flight checks
	I0401 20:56:04.083657  368392 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0401 20:56:04.083752  368392 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0401 20:56:04.083828  368392 kubeadm.go:310] OS: Linux
	I0401 20:56:04.083896  368392 kubeadm.go:310] CGROUPS_CPU: enabled
	I0401 20:56:04.083976  368392 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0401 20:56:04.084034  368392 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0401 20:56:04.084101  368392 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0401 20:56:04.084154  368392 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0401 20:56:04.084218  368392 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0401 20:56:04.084282  368392 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0401 20:56:04.084347  368392 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0401 20:56:04.084412  368392 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0401 20:56:04.137365  368392 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0401 20:56:04.137535  368392 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0401 20:56:04.137707  368392 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0401 20:56:04.144502  368392 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0401 20:56:04.148081  368392 out.go:235]   - Generating certificates and keys ...
	I0401 20:56:04.148163  368392 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0401 20:56:04.148256  368392 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0401 20:56:04.468367  368392 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0401 20:56:04.595348  368392 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0401 20:56:04.846696  368392 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0401 20:56:04.933291  368392 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0401 20:56:05.155272  368392 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0401 20:56:05.155421  368392 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-235733] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:56:05.379489  368392 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0401 20:56:05.379675  368392 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-235733] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0401 20:56:05.525019  368392 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0401 20:56:05.810103  368392 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0401 20:56:05.874624  368392 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0401 20:56:05.874709  368392 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0401 20:56:06.177958  368392 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0401 20:56:06.386480  368392 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0401 20:56:06.536853  368392 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0401 20:56:06.902813  368392 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0401 20:56:07.073460  368392 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0401 20:56:07.074022  368392 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0401 20:56:07.076399  368392 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0401 20:56:07.078643  368392 out.go:235]   - Booting up control plane ...
	I0401 20:56:07.078738  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0401 20:56:07.078814  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0401 20:56:07.078892  368392 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0401 20:56:07.087142  368392 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0401 20:56:07.092214  368392 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0401 20:56:07.092285  368392 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0401 20:56:07.173012  368392 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0401 20:56:07.173192  368392 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0401 20:56:07.673791  368392 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.913575ms
	I0401 20:56:07.673891  368392 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	
	
	==> CRI-O <==
	Apr 01 20:53:05 embed-certs-974821 crio[550]: time="2025-04-01 20:53:05.274218787Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=fde8b57e-2c6e-486e-9501-772b214cbfe6 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:05 embed-certs-974821 crio[550]: time="2025-04-01 20:53:05.274910229Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=75de7f7a-1c51-4cc1-8346-1bead13dfef7 name=/runtime.v1.ImageService/PullImage
	Apr 01 20:53:05 embed-certs-974821 crio[550]: time="2025-04-01 20:53:05.276033817Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:53:52 embed-certs-974821 crio[550]: time="2025-04-01 20:53:52.273505490Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=6244c4f5-37a8-47b6-8b53-d8d8d6991e4f name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:53:52 embed-certs-974821 crio[550]: time="2025-04-01 20:53:52.273812346Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=6244c4f5-37a8-47b6-8b53-d8d8d6991e4f name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:04 embed-certs-974821 crio[550]: time="2025-04-01 20:54:04.273916867Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=5be8893c-4ed4-4384-a00b-ebcaf6ddcc92 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:04 embed-certs-974821 crio[550]: time="2025-04-01 20:54:04.274223660Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=5be8893c-4ed4-4384-a00b-ebcaf6ddcc92 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:17 embed-certs-974821 crio[550]: time="2025-04-01 20:54:17.273635524Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=e2ce7736-89d3-4151-8815-74f3757b605c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:17 embed-certs-974821 crio[550]: time="2025-04-01 20:54:17.273895620Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=e2ce7736-89d3-4151-8815-74f3757b605c name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:31 embed-certs-974821 crio[550]: time="2025-04-01 20:54:31.273822696Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0f3dccdc-6a12-4027-bcdc-e4801291b285 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:31 embed-certs-974821 crio[550]: time="2025-04-01 20:54:31.274043834Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0f3dccdc-6a12-4027-bcdc-e4801291b285 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:44 embed-certs-974821 crio[550]: time="2025-04-01 20:54:44.274351574Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=0f59bb41-4b76-4070-805d-162bc93c855d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:44 embed-certs-974821 crio[550]: time="2025-04-01 20:54:44.274566731Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=0f59bb41-4b76-4070-805d-162bc93c855d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:56 embed-certs-974821 crio[550]: time="2025-04-01 20:54:56.274050262Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=c83fbfb3-3a55-4eff-9568-04a9f8b35754 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:54:56 embed-certs-974821 crio[550]: time="2025-04-01 20:54:56.274331352Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=c83fbfb3-3a55-4eff-9568-04a9f8b35754 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:08 embed-certs-974821 crio[550]: time="2025-04-01 20:55:08.273424464Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=17f1e121-eb27-4c29-927c-a88bdbd41e51 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:08 embed-certs-974821 crio[550]: time="2025-04-01 20:55:08.273706384Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=17f1e121-eb27-4c29-927c-a88bdbd41e51 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:22 embed-certs-974821 crio[550]: time="2025-04-01 20:55:22.274617988Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=7a0ec80a-4b0d-45b3-a560-f981c7816c78 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:22 embed-certs-974821 crio[550]: time="2025-04-01 20:55:22.274888126Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=7a0ec80a-4b0d-45b3-a560-f981c7816c78 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:37 embed-certs-974821 crio[550]: time="2025-04-01 20:55:37.274332981Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=bccc1554-b183-4954-a5c7-dfb8966f3b87 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:37 embed-certs-974821 crio[550]: time="2025-04-01 20:55:37.274567047Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=bccc1554-b183-4954-a5c7-dfb8966f3b87 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:48 embed-certs-974821 crio[550]: time="2025-04-01 20:55:48.274424798Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=d8c329c7-3102-4f17-b262-4b02956eef81 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:55:48 embed-certs-974821 crio[550]: time="2025-04-01 20:55:48.274711773Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d8c329c7-3102-4f17-b262-4b02956eef81 name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:56:03 embed-certs-974821 crio[550]: time="2025-04-01 20:56:03.274238752Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=accc6e6c-fb7d-4845-b8ad-67c9ba57390d name=/runtime.v1.ImageService/ImageStatus
	Apr 01 20:56:03 embed-certs-974821 crio[550]: time="2025-04-01 20:56:03.274540225Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=accc6e6c-fb7d-4845-b8ad-67c9ba57390d name=/runtime.v1.ImageService/ImageStatus
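	[editor's note] The CRI-O log above shows the runtime repeatedly probing for the kindnet image and never finding it: every ImageStatus call reports docker.io/kindest/kindnetd:v20250214-acbabc1a not found, and the PullImage started at 20:53:05 has not completed by 20:56:03. That pattern points at blocked or throttled egress to docker.io rather than a runtime fault. Hypothetical mitigations, assuming that diagnosis:
	  minikube -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a   # side-load from the host image cache
	  sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a                            # or retry the pull inside the node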
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0c4be69226b22       f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5   17 minutes ago      Running             kube-proxy                1                   054a48bf8a57c       kube-proxy-gn6mh
	6709f6284d476       b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389   17 minutes ago      Running             kube-controller-manager   1                   68166a16e4ccf       kube-controller-manager-embed-certs-974821
	1b409b776938c       85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef   17 minutes ago      Running             kube-apiserver            1                   5a3a166087255       kube-apiserver-embed-certs-974821
	a9f1f681f3bf4       d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d   17 minutes ago      Running             kube-scheduler            1                   4fb08364de8f4       kube-scheduler-embed-certs-974821
	732a4bf5b37a1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 minutes ago      Running             etcd                      1                   d8b5cef371e62       etcd-embed-certs-974821
	
	
	==> describe nodes <==
	Name:               embed-certs-974821
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-974821
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=embed-certs-974821
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:34 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-974821
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:56:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:55:03 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:55:03 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:55:03 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:55:03 +0000   Tue, 01 Apr 2025 20:26:32 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-974821
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 28ebfe595ec94fb9a75839c7c4da9d65
	  System UUID:                3349392c-92f4-4067-91a2-749412d851aa
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-974821                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-bq54h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-embed-certs-974821             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-974821    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-gn6mh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-974821             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29m                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 29m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 29m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     29m                kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29m                node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node embed-certs-974821 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node embed-certs-974821 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node embed-certs-974821 event: Registered Node embed-certs-974821 in Controller
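	[editor's note] The Ready=False condition above ("No CNI configuration file in /etc/cni/net.d/") lines up with the CRI-O log: kindnet is the component that writes the CNI config, its image never arrives, so the node keeps its node.kubernetes.io/not-ready:NoSchedule taint and never becomes schedulable. Hypothetical checks, assuming the usual app=kindnet label and conflist name:
	  kubectl --context embed-certs-974821 -n kube-system get pods -l app=kindnet -o wide   # expect ImagePullBackOff / ErrImagePull
	  sudo ls /etc/cni/net.d/                                                               # kindnet writes 10-kindnet.conflist once running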
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
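	[editor's note] The "martian source" lines above are the kernel logging packets whose source address is not valid for the receiving interface; during CNI churn, 10.244.0.x pod traffic appearing on eth0 before routes exist triggers them, and they are noise here rather than a failure. They show up only because martian logging is enabled (hypothetical check):
	  sysctl net.ipv4.conf.all.log_martians             # 1 enables these dmesg entries
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0   # silence them during bring-up, if desired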
	
	
	==> etcd [732a4bf5b37a17d64428372c4b341ca0176e303c278397947fc37e81f445b747] <==
	{"level":"info","ts":"2025-04-01T20:39:03.347143Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-01T20:39:03.348433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-01T20:39:03.347178Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348580Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-01T20:39:03.348736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-04-01T20:39:04.920589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.306335ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571761152512035446 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:665 > success:<request_delete_range:<key:\"/registry/masterleases/192.168.94.2\" > > failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-04-01T20:39:04.921414Z","caller":"traceutil/trace.go:171","msg":"trace[478374922] transaction","detail":"{read_only:false; response_revision:701; number_of_response:1; }","duration":"174.148343ms","start":"2025-04-01T20:39:04.747247Z","end":"2025-04-01T20:39:04.921396Z","steps":["trace[478374922] 'process raft request'  (duration: 174.071396ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921615Z","caller":"traceutil/trace.go:171","msg":"trace[1294899020] linearizableReadLoop","detail":"{readStateIndex:873; appliedIndex:872; }","duration":"174.902577ms","start":"2025-04-01T20:39:04.746663Z","end":"2025-04-01T20:39:04.921566Z","steps":["trace[1294899020] 'read index received'  (duration: 981.565µs)","trace[1294899020] 'applied index is now lower than readState.Index'  (duration: 173.918021ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:04.921658Z","caller":"traceutil/trace.go:171","msg":"trace[1643816995] transaction","detail":"{read_only:false; response_revision:700; number_of_response:1; }","duration":"174.752569ms","start":"2025-04-01T20:39:04.746898Z","end":"2025-04-01T20:39:04.921650Z","steps":["trace[1643816995] 'process raft request'  (duration: 174.347461ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.921801Z","caller":"traceutil/trace.go:171","msg":"trace[214304335] transaction","detail":"{read_only:false; number_of_response:1; response_revision:699; }","duration":"175.517874ms","start":"2025-04-01T20:39:04.746273Z","end":"2025-04-01T20:39:04.921791Z","steps":["trace[214304335] 'compare'  (duration: 172.157301ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.921867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.179491ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-974821\" limit:1 ","response":"range_response_count:1 size:5750"}
	{"level":"info","ts":"2025-04-01T20:39:04.922390Z","caller":"traceutil/trace.go:171","msg":"trace[1175626099] range","detail":"{range_begin:/registry/minions/embed-certs-974821; range_end:; response_count:1; response_revision:701; }","duration":"175.735808ms","start":"2025-04-01T20:39:04.746639Z","end":"2025-04-01T20:39:04.922375Z","steps":["trace[1175626099] 'agreement among raft nodes before linearized reading'  (duration: 175.172297ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.922892Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.707137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" limit:1 ","response":"range_response_count:1 size:92298"}
	{"level":"info","ts":"2025-04-01T20:39:04.922963Z","caller":"traceutil/trace.go:171","msg":"trace[382725270] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:701; }","duration":"104.813727ms","start":"2025-04-01T20:39:04.818140Z","end":"2025-04-01T20:39:04.922954Z","steps":["trace[382725270] 'agreement among raft nodes before linearized reading'  (duration: 104.571539ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:04.923317Z","caller":"traceutil/trace.go:171","msg":"trace[1182439] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:701; }","duration":"104.889107ms","start":"2025-04-01T20:39:04.818419Z","end":"2025-04-01T20:39:04.923308Z","steps":["trace[1182439] 'agreement among raft nodes before linearized reading'  (duration: 104.87954ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-01T20:39:04.923503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.18834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-04-01T20:39:04.923557Z","caller":"traceutil/trace.go:171","msg":"trace[53470254] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:701; }","duration":"105.257596ms","start":"2025-04-01T20:39:04.818292Z","end":"2025-04-01T20:39:04.923549Z","steps":["trace[53470254] 'agreement among raft nodes before linearized reading'  (duration: 105.178511ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:39:37.619038Z","caller":"traceutil/trace.go:171","msg":"trace[512211353] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"105.547476ms","start":"2025-04-01T20:39:37.513466Z","end":"2025-04-01T20:39:37.619014Z","steps":["trace[512211353] 'process raft request'  (duration: 43.691695ms)","trace[512211353] 'compare'  (duration: 61.757597ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-01T20:39:37.620916Z","caller":"traceutil/trace.go:171","msg":"trace[1272640698] transaction","detail":"{read_only:false; response_revision:824; number_of_response:1; }","duration":"101.494988ms","start":"2025-04-01T20:39:37.519401Z","end":"2025-04-01T20:39:37.620896Z","steps":["trace[1272640698] 'process raft request'  (duration: 101.291053ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-01T20:49:03.370677Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":939}
	{"level":"info","ts":"2025-04-01T20:49:03.375303Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":939,"took":"4.360178ms","hash":2566575144,"current-db-size-bytes":1998848,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1998848,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-04-01T20:49:03.375354Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2566575144,"revision":939,"compact-revision":500}
	{"level":"info","ts":"2025-04-01T20:54:03.375560Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2025-04-01T20:54:03.378030Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1058,"took":"2.223493ms","hash":4211526562,"current-db-size-bytes":1998848,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1146880,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-04-01T20:54:03.378068Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":4211526562,"revision":1058,"compact-revision":939}
	
	
	==> kernel <==
	 20:56:12 up  1:38,  0 users,  load average: 1.13, 0.52, 0.88
	Linux embed-certs-974821 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [1b409b776938c7f6d6325283fe8d5f7d2038212e8bab65b45b30c12beae6f139] <==
	 > logger="UnhandledError"
	I0401 20:52:05.648868       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:54:04.648487       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:54:04.648583       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0401 20:54:05.650704       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:54:05.650707       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:54:05.650787       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:54:05.650819       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:54:05.651920       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:54:05.651942       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0401 20:55:05.652725       1 handler_proxy.go:99] no RequestInfo found in the context
	W0401 20:55:05.652753       1 handler_proxy.go:99] no RequestInfo found in the context
	E0401 20:55:05.652791       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0401 20:55:05.652810       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0401 20:55:05.653917       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:55:05.653949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6709f6284d476f9efda2e9d43e571a75efeb97855b385ce4b1586eaa4de4f1a9] <==
	E0401 20:50:38.930608       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:50:38.979398       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:08.936205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:08.986536       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:51:38.942414       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:51:38.993024       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:08.947656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:09.000718       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:52:38.952274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:52:39.006957       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:08.957882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:09.013601       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:53:38.963711       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:53:39.020553       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:08.969156       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:09.026725       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:54:38.975297       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:54:39.032971       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I0401 20:55:03.763120       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-974821"
	E0401 20:55:08.981064       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:09.039999       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:55:38.985826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:55:39.046631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0401 20:56:08.991782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0401 20:56:09.054688       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [0c4be69226b22952a80da0c17c51cbc7f4486bc715cbe15cc3dd88daecfaf452] <==
	I0401 20:39:06.072071       1 server_linux.go:66] "Using iptables proxy"
	I0401 20:39:06.448227       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0401 20:39:06.461903       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0401 20:39:06.641034       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0401 20:39:06.641193       1 server_linux.go:170] "Using iptables Proxier"
	I0401 20:39:06.661209       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0401 20:39:06.661731       1 server.go:497] "Version info" version="v1.32.2"
	I0401 20:39:06.661779       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:06.671952       1 config.go:105] "Starting endpoint slice config controller"
	I0401 20:39:06.673686       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0401 20:39:06.672521       1 config.go:329] "Starting node config controller"
	I0401 20:39:06.673736       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0401 20:39:06.672555       1 config.go:199] "Starting service config controller"
	I0401 20:39:06.673765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0401 20:39:06.774792       1 shared_informer.go:320] Caches are synced for service config
	I0401 20:39:06.774838       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0401 20:39:06.775459       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a9f1f681f3bf4be0d5f99a181b4ddfe1efade3b57adf4f7e82926d6306363cec] <==
	I0401 20:39:02.378239       1 serving.go:386] Generated self-signed cert in-memory
	W0401 20:39:04.549023       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:04.549065       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:04.549076       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:04.549086       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:04.727215       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0401 20:39:04.727317       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0401 20:39:04.729809       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0401 20:39:04.729861       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:04.730096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0401 20:39:04.730177       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0401 20:39:04.842475       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 01 20:55:20 embed-certs-974821 kubelet[676]: E0401 20:55:20.378019     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:22 embed-certs-974821 kubelet[676]: E0401 20:55:22.275138     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:55:25 embed-certs-974821 kubelet[676]: E0401 20:55:25.378940     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:30 embed-certs-974821 kubelet[676]: E0401 20:55:30.274922     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540930274732457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:30 embed-certs-974821 kubelet[676]: E0401 20:55:30.274968     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540930274732457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:30 embed-certs-974821 kubelet[676]: E0401 20:55:30.379746     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:35 embed-certs-974821 kubelet[676]: E0401 20:55:35.380670     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:37 embed-certs-974821 kubelet[676]: E0401 20:55:37.274870     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:55:40 embed-certs-974821 kubelet[676]: E0401 20:55:40.275893     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540940275723113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:40 embed-certs-974821 kubelet[676]: E0401 20:55:40.275941     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540940275723113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:40 embed-certs-974821 kubelet[676]: E0401 20:55:40.381629     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:45 embed-certs-974821 kubelet[676]: E0401 20:55:45.383082     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:48 embed-certs-974821 kubelet[676]: E0401 20:55:48.274969     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:55:50 embed-certs-974821 kubelet[676]: E0401 20:55:50.276781     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540950276611295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:50 embed-certs-974821 kubelet[676]: E0401 20:55:50.276821     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540950276611295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:55:50 embed-certs-974821 kubelet[676]: E0401 20:55:50.384655     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:55:55 embed-certs-974821 kubelet[676]: E0401 20:55:55.386330     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:00 embed-certs-974821 kubelet[676]: E0401 20:56:00.277632     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540960277453380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:00 embed-certs-974821 kubelet[676]: E0401 20:56:00.277681     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540960277453380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:00 embed-certs-974821 kubelet[676]: E0401 20:56:00.387161     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:03 embed-certs-974821 kubelet[676]: E0401 20:56:03.274847     676 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: reading manifest v20250214-acbabc1a in docker.io/kindest/kindnetd: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-bq54h" podUID="f880d90a-5596-4ce4-b2e9-ab4094de1621"
	Apr 01 20:56:05 embed-certs-974821 kubelet[676]: E0401 20:56:05.388435     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Apr 01 20:56:10 embed-certs-974821 kubelet[676]: E0401 20:56:10.278592     676 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540970278380519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:10 embed-certs-974821 kubelet[676]: E0401 20:56:10.278641     676 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1743540970278380519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125696,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Apr 01 20:56:10 embed-certs-974821 kubelet[676]: E0401 20:56:10.390044     676 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-974821 -n embed-certs-974821
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-974821 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1 (74.001143ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qwn44 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qwn44:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  12m (x2 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  20m (x2 over 25m)  default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-8kp7j" not found
	Error from server (NotFound): pods "kindnet-bq54h" not found
	Error from server (NotFound): pods "metrics-server-f79f97bbb-nnhr5" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-86c6bf9756-x6nnb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-7779f9b69b-q2fjx" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-974821 describe pod busybox coredns-668d6bf9bc-8kp7j kindnet-bq54h metrics-server-f79f97bbb-nnhr5 storage-provisioner dashboard-metrics-scraper-86c6bf9756-x6nnb kubernetes-dashboard-7779f9b69b-q2fjx: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (241.45s)
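
The kubelet entries above point to the likely root cause of this failure chain: every pull of docker.io/kindest/kindnetd:v20250214-acbabc1a fails with toomanyrequests (Docker Hub's anonymous pull rate limit), so the kindnet CNI never starts, no CNI config lands in /etc/cni/net.d/, the node keeps its not-ready taint, and the dashboard pods stay Pending. A minimal workaround sketch, assuming a workstation with authenticated Docker Hub access; the image tag and profile name are copied from the logs above and this is not part of the test harness itself:

	# Pull with registry credentials so the request is not counted against the anonymous rate limit.
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	# Side-load the image into the minikube node so kubelet never needs to reach the registry.
	out/minikube-linux-amd64 -p embed-certs-974821 image load docker.io/kindest/kindnetd:v20250214-acbabc1a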

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (216.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-p4fvg" [ed27ed13-b1a7-4240-bb98-42799c4e74b8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0401 20:53:05.583047   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/auto-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:53:26.124315   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:53:45.468199   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:54:07.012675   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/calico-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:54:29.791802   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/custom-flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:54:53.252415   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:54:56.735319   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/enable-default-cni-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:55:37.515330   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/flannel-460236/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:55:37.710994   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/bridge-460236/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-04-01 20:55:47.111098696 +0000 UTC m=+4232.712030129
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-964633 describe po kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe po kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard: context deadline exceeded (1.99µs)
start_stop_delete_test.go:285: kubectl --context old-k8s-version-964633 describe po kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-964633 logs kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 logs kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard: context deadline exceeded (652ns)
start_stop_delete_test.go:285: kubectl --context old-k8s-version-964633 logs kubernetes-dashboard-cd95d586-p4fvg -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-964633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (268ns)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-964633 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
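
As in the embed-certs group above, the dashboard deployment likely never appeared because the node kept the node.kubernetes.io/not-ready taint (the pod status above shows PodScheduled:Unschedulable for exactly that taint), so the image check had nothing to inspect. A short diagnostic sketch, assuming the same kubectl context and minikube profile the test uses:

	# Confirm whether the not-ready taint is still set on the node.
	kubectl --context old-k8s-version-964633 describe node old-k8s-version-964633 | grep Taints
	# Check whether any CNI configuration has been written on the node yet.
	out/minikube-linux-amd64 -p old-k8s-version-964633 ssh "ls /etc/cni/net.d/"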
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-964633
helpers_test.go:235: (dbg) docker inspect old-k8s-version-964633:

-- stdout --
	[
	    {
	        "Id": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	        "Created": "2025-04-01T20:25:51.557164575Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 352399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-01T20:38:53.587755812Z",
	            "FinishedAt": "2025-04-01T20:38:52.359374523Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/hosts",
	        "LogPath": "/var/lib/docker/containers/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6/ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6-json.log",
	        "Name": "/old-k8s-version-964633",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-964633:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-964633",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ed2d0d1c2b7e920b6faf6dae9ac3ef2128da72aa20bb32898d7017b9200dfff6",
	                "LowerDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b-init/diff:/var/lib/docker/overlay2/58ab0f969881f9dc36059731f89b7320a7f189f8480f6c78bc37388b422863d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b421b7651ef379232ab7786ffe2ead1877b1d5462c8ffcb5213b3203b251d58b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-964633",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-964633/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-964633",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-964633",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98507353cdf3ad29538d69a6c2ab371dc9afedd5474261071e73baebb06da200",
	            "SandboxKey": "/var/run/docker/netns/98507353cdf3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-964633": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:45:5d:ae:77:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fa1190968e91c2b4b46ed5001c6999dbffa85fccb349d7fe54ec6eb7dee75cd",
	                    "EndpointID": "97180c448aba15ca3cf07e1fc19eac60b297d564aac63d5f4b5b7521b5a4989c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-964633",
	                        "ed2d0d1c2b7e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-964633 logs -n 25: (1.046122584s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-460236 sudo cat                             | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo                                 | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo find                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p flannel-460236 sudo crio                            | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p flannel-460236                                      | flannel-460236               | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	| delete  | -p                                                     | disable-driver-mounts-564557 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC | 01 Apr 25 20:26 UTC |
	|         | disable-driver-mounts-564557                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:26 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-671514             | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-671514                  | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-671514                                   | no-preload-671514            | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-974821            | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-964633        | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-993330  | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-974821                 | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-974821                                  | embed-certs-974821           | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-964633             | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-964633                              | old-k8s-version-964633       | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-993330       | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC | 01 Apr 25 20:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-993330 | jenkins | v1.35.0 | 01 Apr 25 20:38 UTC |                     |
	|         | default-k8s-diff-port-993330                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 20:38:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 20:38:52.105725  347136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:52.105777  347136 machine.go:96] duration metric: took 4.546248046s to provisionDockerMachine
	I0401 20:38:52.105792  347136 start.go:293] postStartSetup for "no-preload-671514" (driver="docker")
	I0401 20:38:52.105806  347136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:52.105864  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:52.105906  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.129248  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
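The -f template in the docker inspect call above is how minikube discovers which host port is forwarded to the container's SSH daemon: it indexes .NetworkSettings.Ports at key "22/tcp" and takes the first binding's HostPort (33108 for no-preload-671514 in this run, as the sshutil line confirms). Stripped of the log plumbing, the same probe is:

    # print the host port mapped to container port 22/tcp
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-671514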
	I0401 20:38:52.235223  347136 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:52.239186  347136 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:52.239231  347136 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:52.239244  347136 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:52.239252  347136 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:52.239264  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:52.239327  347136 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:52.239456  347136 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:52.239595  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:52.250609  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:52.360211  347136 start.go:296] duration metric: took 254.402357ms for postStartSetup
	I0401 20:38:52.360296  347136 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:52.360351  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.387676  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.491523  347136 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
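Both df probes above use the same awk idiom: NR==2 skips the header row, and $5 or $4 picks the use-percent or available column, so minikube gets a single-token answer about /var. For instance:

    df -BG /var | awk 'NR==2{print $4}'    # free space on /var in whole gigabytes, header skipped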
	I0401 20:38:52.496092  347136 fix.go:56] duration metric: took 5.344693031s for fixHost
	I0401 20:38:52.496122  347136 start.go:83] releasing machines lock for "no-preload-671514", held for 5.344749398s
	I0401 20:38:52.496189  347136 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-671514
	I0401 20:38:52.517531  347136 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:52.517580  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.517648  347136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:52.517707  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:52.537919  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.538649  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:52.645127  347136 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:52.736297  347136 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:52.881591  347136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:52.887010  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.896812  347136 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:52.896873  347136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:52.905846  347136 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
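The find/-exec mv pairs above are minikube's reversible way of disabling host CNI configs: each matching file in /etc/cni/net.d is renamed with a .mk_disabled suffix so the runtime stops loading it, while the original content is preserved. A sketch of the inverse operation (hypothetical cleanup, not part of this log):

    # put back any CNI configs that were parked with the .mk_disabled suffix
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;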
	I0401 20:38:52.905865  347136 start.go:495] detecting cgroup driver to use...
	I0401 20:38:52.905896  347136 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:52.905938  347136 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:52.918607  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:52.930023  347136 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:52.930070  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:52.941984  347136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:52.953161  347136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:53.037477  347136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:53.138872  347136 docker.go:233] disabling docker service ...
	I0401 20:38:53.138945  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:53.158423  347136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:53.171926  347136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:53.269687  347136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:53.393413  347136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:53.477027  347136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:53.497246  347136 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:53.497310  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.507914  347136 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:53.507976  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.518788  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.529573  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.540440  347136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:53.549534  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.559313  347136 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:53.567905  347136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
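The run of sed edits above rewrites CRI-O's drop-in in place: pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A sketch of the resulting /etc/crio/crio.conf.d/02-crio.conf, assuming only the keys touched here (the section headers follow CRI-O's documented layout and are not shown in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Setting ip_unprivileged_port_start=0 lets containers bind ports below 1024 without extra capabilities, which ingress-style pods rely on.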
	I0401 20:38:53.578610  347136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:53.587658  347136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:53.597372  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:53.698689  347136 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:53.836550  347136 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:53.836611  347136 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:53.841661  347136 start.go:563] Will wait 60s for crictl version
	I0401 20:38:53.841725  347136 ssh_runner.go:195] Run: which crictl
	I0401 20:38:53.846721  347136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:53.899416  347136 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:53.899483  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:53.952152  347136 ssh_runner.go:195] Run: crio --version
	I0401 20:38:54.004010  352934 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:38:54.005923  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.005938  352934 out.go:358] Setting ErrFile to fd 2...
	I0401 20:38:54.005944  352934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:38:54.006257  352934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:38:54.007071  352934 out.go:352] Setting JSON to false
	I0401 20:38:54.008365  352934 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4880,"bootTime":1743535054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:38:54.008473  352934 start.go:139] virtualization: kvm guest
	I0401 20:38:54.009995  347136 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:54.010067  352934 out.go:177] * [default-k8s-diff-port-993330] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:38:54.011694  352934 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:38:54.011712  352934 notify.go:220] Checking for updates...
	I0401 20:38:54.014145  352934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:38:54.015895  352934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.024127  352934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:38:54.025658  352934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:38:54.027828  352934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:38:54.030319  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:54.031226  352934 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:38:54.070845  352934 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:38:54.070960  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.133073  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.122997904 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.133166  352934 docker.go:318] overlay module found
	I0401 20:38:54.135111  352934 out.go:177] * Using the docker driver based on existing profile
	I0401 20:38:54.136307  352934 start.go:297] selected driver: docker
	I0401 20:38:54.136318  352934 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.136401  352934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:38:54.137155  352934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:38:54.199415  352934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:78 SystemTime:2025-04-01 20:38:54.186560463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:38:54.199852  352934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0401 20:38:54.199898  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.199941  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.199981  352934 start.go:340] cluster config:
	{Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.202194  352934 out.go:177] * Starting "default-k8s-diff-port-993330" primary control-plane node in "default-k8s-diff-port-993330" cluster
	I0401 20:38:54.203578  352934 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 20:38:54.204902  352934 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0401 20:38:54.206239  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.206288  352934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 20:38:54.206290  352934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 20:38:54.206297  352934 cache.go:56] Caching tarball of preloaded images
	I0401 20:38:54.206483  352934 preload.go:172] Found /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0401 20:38:54.206500  352934 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
	I0401 20:38:54.206609  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.230387  352934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0401 20:38:54.230407  352934 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0401 20:38:54.230421  352934 cache.go:230] Successfully downloaded all kic artifacts
	I0401 20:38:54.230449  352934 start.go:360] acquireMachinesLock for default-k8s-diff-port-993330: {Name:mk06aff0f25d0080818cb1ab5e643246575bb967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0401 20:38:54.230516  352934 start.go:364] duration metric: took 43.047µs to acquireMachinesLock for "default-k8s-diff-port-993330"
	I0401 20:38:54.230538  352934 start.go:96] Skipping create...Using existing machine configuration
	I0401 20:38:54.230548  352934 fix.go:54] fixHost starting: 
	I0401 20:38:54.230815  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.253099  352934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-993330: state=Stopped err=<nil>
	W0401 20:38:54.253122  352934 fix.go:138] unexpected machine state, will restart: <nil>
	I0401 20:38:54.255111  352934 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-993330" ...
	I0401 20:38:54.011605  347136 cli_runner.go:164] Run: docker network inspect no-preload-671514 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:54.041213  347136 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:54.049326  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
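That bash one-liner is minikube's idempotent /etc/hosts update: filter out any existing line ending in the tab-separated hostname, append the fresh mapping, and copy the temp file over /etc/hosts in one step (the same pattern appears again below for control-plane.minikube.internal). Generalized as a sketch (hypothetical helper, same mechanics):

    update_hosts_entry() {   # usage: update_hosts_entry 192.168.76.1 host.minikube.internal
      ip=$1; name=$2
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }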
	I0401 20:38:54.064336  347136 kubeadm.go:883] updating cluster {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:54.064466  347136 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:54.064514  347136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:54.115208  347136 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:54.115234  347136 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:54.115244  347136 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:54.115361  347136 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-671514 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
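Note the empty ExecStart= line in the generated unit above: in a systemd drop-in, assigning ExecStart to nothing first clears the packaged command so that the following ExecStart= replaces it rather than adding a second one. On the node, the merged result can be inspected with standard systemd tooling (not part of this log):

    systemctl cat kubelet                  # unit file plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # the effective command line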
	I0401 20:38:54.115437  347136 ssh_runner.go:195] Run: crio config
	I0401 20:38:54.178193  347136 cni.go:84] Creating CNI manager for ""
	I0401 20:38:54.178238  347136 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:54.178256  347136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:54.178289  347136 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-671514 NodeName:no-preload-671514 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:54.178457  347136 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-671514"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
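This rendered trio (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets shipped to /var/tmp/minikube/kubeadm.yaml.new below. If one wanted to sanity-check such a file offline, recent kubeadm releases ship a validate subcommand (a sketch, assuming the v1.32.2 binary path used elsewhere in this log):

    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new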
	
	I0401 20:38:54.178530  347136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:54.199512  347136 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:54.199574  347136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:54.209629  347136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0401 20:38:54.230923  347136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:54.251534  347136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0401 20:38:54.278110  347136 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:54.281967  347136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:54.294866  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:54.389642  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:54.412054  347136 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514 for IP: 192.168.76.2
	I0401 20:38:54.412081  347136 certs.go:194] generating shared ca certs ...
	I0401 20:38:54.412105  347136 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.412352  347136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:38:54.412421  347136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:38:54.412433  347136 certs.go:256] generating profile certs ...
	I0401 20:38:54.412560  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/client.key
	I0401 20:38:54.412672  347136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key.228ec789
	I0401 20:38:54.412732  347136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key
	I0401 20:38:54.412866  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:38:54.412906  347136 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:38:54.412921  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:38:54.412951  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:38:54.412982  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:38:54.413010  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:38:54.413066  347136 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:54.413998  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:38:54.440067  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:38:54.465329  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:38:54.494557  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:38:54.551370  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0401 20:38:54.581365  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0401 20:38:54.629398  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:38:54.652474  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/no-preload-671514/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:38:54.675343  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:38:54.697544  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:38:54.720631  347136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:38:54.743975  347136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:38:54.764403  347136 ssh_runner.go:195] Run: openssl version
	I0401 20:38:54.770164  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:38:54.778967  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782488  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.782536  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:38:54.788662  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:38:54.797231  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:38:54.806689  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810660  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.810715  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:38:54.817439  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:38:54.826613  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:38:54.835800  347136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840121  347136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.840185  347136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:38:54.849006  347136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
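The ls/openssl/ln triples above build OpenSSL's hashed-name CA lookup: clients locate a CA by hashing its subject and opening /etc/ssl/certs/<hash>.0, so each PEM needs a symlink named after its openssl x509 -hash value (b5213941 for minikubeCA.pem in this run). Condensed to one cert:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # b5213941 per the log
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"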
	I0401 20:38:54.859346  347136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:38:54.864799  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:38:54.872292  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:38:54.879751  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:38:54.886458  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:38:54.893167  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:38:54.899638  347136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
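Each of the -checkend 86400 probes above asks whether a certificate expires within the next 86400 seconds (24 hours): openssl exits 0 if the cert outlives that window and 1 if not, which is the signal minikube uses to decide whether regeneration is needed. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"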
	I0401 20:38:54.906114  347136 kubeadm.go:392] StartCluster: {Name:no-preload-671514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:no-preload-671514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:38:54.906201  347136 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:38:54.906239  347136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:38:54.951940  347136 cri.go:89] found id: ""
	I0401 20:38:54.952000  347136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:38:54.960578  347136 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:38:54.960602  347136 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:38:54.960646  347136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:38:54.970053  347136 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:38:54.970572  347136 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-671514" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:54.970739  347136 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-671514" cluster setting kubeconfig missing "no-preload-671514" context setting]
	I0401 20:38:54.971129  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:54.972990  347136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:38:55.021631  347136 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
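The diff -u above compares the kubeadm config already on disk with the freshly rendered .new copy; an empty diff is what lets minikube conclude that the running cluster needs no reconfiguration and skip the restart path. The decision boils down to (a sketch of the same check, with an assumed copy-over on mismatch):

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "running cluster matches rendered config; no kubeadm reconfiguration needed"
    else
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml   # hypothetical follow-up
    fi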
	I0401 20:38:55.021668  347136 kubeadm.go:597] duration metric: took 61.060707ms to restartPrimaryControlPlane
	I0401 20:38:55.021677  347136 kubeadm.go:394] duration metric: took 115.573169ms to StartCluster
	I0401 20:38:55.021696  347136 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.021775  347136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:38:55.022611  347136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:38:55.022884  347136 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:38:55.023270  347136 config.go:182] Loaded profile config "no-preload-671514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:55.023240  347136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:38:55.023393  347136 addons.go:69] Setting storage-provisioner=true in profile "no-preload-671514"
	I0401 20:38:55.023403  347136 addons.go:69] Setting dashboard=true in profile "no-preload-671514"
	I0401 20:38:55.023420  347136 addons.go:238] Setting addon storage-provisioner=true in "no-preload-671514"
	I0401 20:38:55.023431  347136 addons.go:238] Setting addon dashboard=true in "no-preload-671514"
	W0401 20:38:55.023448  347136 addons.go:247] addon dashboard should already be in state true
	I0401 20:38:55.023455  347136 addons.go:69] Setting default-storageclass=true in profile "no-preload-671514"
	I0401 20:38:55.023472  347136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-671514"
	I0401 20:38:55.023482  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023499  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023428  347136 addons.go:69] Setting metrics-server=true in profile "no-preload-671514"
	I0401 20:38:55.023538  347136 addons.go:238] Setting addon metrics-server=true in "no-preload-671514"
	W0401 20:38:55.023550  347136 addons.go:247] addon metrics-server should already be in state true
	I0401 20:38:55.023576  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.023815  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.023975  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024000  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.024068  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
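The toEnable map above is minikube's internal view of the addon flags; the four addons being enabled here map onto the familiar CLI, e.g.:

	minikube -p no-preload-671514 addons enable metrics-server
	minikube -p no-preload-671514 addons enable dashboard
	minikube -p no-preload-671514 addons list

(storage-provisioner and default-storageclass are normally on by default, which is why only the first two usually need explicit enabling.)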
	I0401 20:38:55.026917  347136 out.go:177] * Verifying Kubernetes components...
	I0401 20:38:55.029291  347136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:55.055781  347136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:38:55.055855  347136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:38:55.057061  347136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.057080  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:38:55.057138  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.057350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:38:55.057367  347136 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:38:55.057424  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.062918  347136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:38:55.065275  347136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:38:55.066480  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:38:55.066515  347136 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:38:55.066577  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.071145  347136 addons.go:238] Setting addon default-storageclass=true in "no-preload-671514"
	I0401 20:38:55.071200  347136 host.go:66] Checking if "no-preload-671514" exists ...
	I0401 20:38:55.071691  347136 cli_runner.go:164] Run: docker container inspect no-preload-671514 --format={{.State.Status}}
	I0401 20:38:55.083530  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.091553  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094122  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.094336  347136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.094354  347136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:38:55.094412  347136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-671514
	I0401 20:38:55.111336  347136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/no-preload-671514/id_rsa Username:docker}
	I0401 20:38:55.351041  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:38:55.351070  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:38:55.437350  347136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:38:55.519566  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:38:55.519592  347136 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:38:55.519813  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:38:55.525350  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:38:55.525376  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:38:55.525417  347136 node_ready.go:35] waiting up to 6m0s for node "no-preload-671514" to be "Ready" ...
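The node_ready wait above polls the node object until its Ready condition becomes True. The same check can be reproduced with kubectl; a sketch, using the same 6-minute budget the log states:

	kubectl --context no-preload-671514 wait --for=condition=Ready \
	  node/no-preload-671514 --timeout=6m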
	I0401 20:38:55.529286  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:38:55.619132  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:38:55.619161  347136 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:38:55.633068  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:38:55.633096  347136 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:38:55.723947  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:38:55.723973  347136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:38:55.735846  347136 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.735875  347136 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:38:55.823952  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:38:55.823983  347136 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:38:55.832856  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:38:55.844619  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:38:55.844646  347136 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:38:55.930714  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:38:55.930749  347136 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:38:55.948106  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:38:55.948132  347136 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:38:56.032557  347136 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:56.032584  347136 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:38:56.049457  347136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:53.256452  351594 cli_runner.go:164] Run: docker start embed-certs-974821
	I0401 20:38:53.591647  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:38:53.614453  351594 kic.go:430] container "embed-certs-974821" state is running.
	I0401 20:38:53.614804  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:53.647522  351594 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/config.json ...
	I0401 20:38:53.647770  351594 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.647842  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:53.682651  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.682960  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:53.682985  351594 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.683750  351594 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48874->127.0.0.1:33113: read: connection reset by peer
	I0401 20:38:56.817604  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.817648  351594 ubuntu.go:169] provisioning hostname "embed-certs-974821"
	I0401 20:38:56.817793  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:56.837276  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:56.837520  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:56.837557  351594 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-974821 && echo "embed-certs-974821" | sudo tee /etc/hostname
	I0401 20:38:56.985349  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-974821
	
	I0401 20:38:56.985437  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.003678  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.003886  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.003902  351594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-974821' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-974821/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-974821' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.138051  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
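The hosts-patching script above either rewrites an existing 127.0.1.1 entry or appends a new one, so after it runs the guest's /etc/hosts carries a single alias line:

	127.0.1.1 embed-certs-974821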
	I0401 20:38:57.138083  351594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.138119  351594 ubuntu.go:177] setting up certificates
	I0401 20:38:57.138129  351594 provision.go:84] configureAuth start
	I0401 20:38:57.138183  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:57.160793  351594 provision.go:143] copyHostCerts
	I0401 20:38:57.160846  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.160861  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.160928  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.161033  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.161046  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.161073  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.161143  351594 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.161150  351594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.161173  351594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.161236  351594 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.embed-certs-974821 san=[127.0.0.1 192.168.94.2 embed-certs-974821 localhost minikube]
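provision.go generates this server certificate in Go (crypto/x509), not by shelling out. Purely as an illustration, the same org and SAN set shown on the line above could be produced with openssl; file names here are placeholders:

	openssl req -new -key server-key.pem \
	  -subj "/O=jenkins.embed-certs-974821" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:embed-certs-974821,DNS:localhost,DNS:minikube')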
	I0401 20:38:57.342909  351594 provision.go:177] copyRemoteCerts
	I0401 20:38:57.342986  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.343039  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.366221  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:57.472015  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.495541  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0401 20:38:57.524997  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:57.549236  351594 provision.go:87] duration metric: took 411.092761ms to configureAuth
	I0401 20:38:57.549262  351594 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.549469  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:57.549578  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.568385  351594 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.568723  351594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0401 20:38:57.568748  351594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:57.895046  351594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:57.895076  351594 machine.go:96] duration metric: took 4.247292894s to provisionDockerMachine
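The CRIO_MINIKUBE_OPTIONS drop-in written above marks 10.96.0.0/12, the default Kubernetes service CIDR, as an insecure registry range, so pulls from in-cluster ClusterIP registries are not rejected for lacking TLS. The result can be inspected from the host; a sketch, using the container name from this log:

	docker exec embed-certs-974821 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '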
	I0401 20:38:57.895090  351594 start.go:293] postStartSetup for "embed-certs-974821" (driver="docker")
	I0401 20:38:57.895103  351594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:57.895197  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:57.895246  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:57.915083  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:53.559610  351961 cli_runner.go:164] Run: docker start old-k8s-version-964633
	I0401 20:38:53.842845  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:38:53.869722  351961 kic.go:430] container "old-k8s-version-964633" state is running.
	I0401 20:38:53.870198  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:53.898052  351961 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/config.json ...
	I0401 20:38:53.898321  351961 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:53.898397  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:53.927685  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:53.927896  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:53.927903  351961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:53.928642  351961 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48734->127.0.0.1:33118: read: connection reset by peer
	I0401 20:38:57.062029  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.062064  351961 ubuntu.go:169] provisioning hostname "old-k8s-version-964633"
	I0401 20:38:57.062123  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.080716  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.080924  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.080937  351961 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-964633 && echo "old-k8s-version-964633" | sudo tee /etc/hostname
	I0401 20:38:57.240578  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-964633
	
	I0401 20:38:57.240662  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.260618  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.260889  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.260907  351961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-964633' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-964633/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-964633' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:57.401787  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:57.401828  351961 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:57.401871  351961 ubuntu.go:177] setting up certificates
	I0401 20:38:57.401886  351961 provision.go:84] configureAuth start
	I0401 20:38:57.401949  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:57.422490  351961 provision.go:143] copyHostCerts
	I0401 20:38:57.422554  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:57.422569  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:57.422640  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:57.422791  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:57.422806  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:57.422844  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:57.422949  351961 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:57.422960  351961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:57.422994  351961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:57.423199  351961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-964633 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-964633]
	I0401 20:38:57.571252  351961 provision.go:177] copyRemoteCerts
	I0401 20:38:57.571297  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:57.571327  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.591959  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:57.694089  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:57.716992  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0401 20:38:57.743592  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0401 20:38:57.770813  351961 provision.go:87] duration metric: took 368.908054ms to configureAuth
	I0401 20:38:57.770843  351961 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:57.771048  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:38:57.771183  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:57.799733  351961 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.799933  351961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0401 20:38:57.799954  351961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.118005  351961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.118036  351961 machine.go:96] duration metric: took 4.219703731s to provisionDockerMachine
	I0401 20:38:58.118048  351961 start.go:293] postStartSetup for "old-k8s-version-964633" (driver="docker")
	I0401 20:38:58.118078  351961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.118141  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.118190  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.157345  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.260528  351961 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.263954  351961 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.263997  351961 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.264009  351961 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.264016  351961 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.264031  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.264134  351961 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.264236  351961 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.264349  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.273031  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.295080  351961 start.go:296] duration metric: took 177.019024ms for postStartSetup
	I0401 20:38:58.295156  351961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.295211  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.313972  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
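The df/awk one-liner that follows postStartSetup is a disk-usage guard: awk 'NR==2{print $5}' picks the Use% column from df's second output line. Run by hand it prints a single percentage (the value below is illustrative; the log records only the command):

	$ df -h /var | awk 'NR==2{print $5}'
	23%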
	I0401 20:38:54.256421  352934 cli_runner.go:164] Run: docker start default-k8s-diff-port-993330
	I0401 20:38:54.526683  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:38:54.551292  352934 kic.go:430] container "default-k8s-diff-port-993330" state is running.
	I0401 20:38:54.551997  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:54.571770  352934 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/config.json ...
	I0401 20:38:54.571962  352934 machine.go:93] provisionDockerMachine start ...
	I0401 20:38:54.572029  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:54.593544  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:54.593785  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:54.593801  352934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0401 20:38:54.594444  352934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41354->127.0.0.1:33123: read: connection reset by peer
	I0401 20:38:57.729265  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.729305  352934 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-993330"
	I0401 20:38:57.729371  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.751913  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.752222  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.752257  352934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993330 && echo "default-k8s-diff-port-993330" | sudo tee /etc/hostname
	I0401 20:38:57.901130  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993330
	
	I0401 20:38:57.901261  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:57.930504  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:57.930800  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:57.930823  352934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993330/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0401 20:38:58.075023  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0401 20:38:58.075050  352934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20506-16361/.minikube CaCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20506-16361/.minikube}
	I0401 20:38:58.075102  352934 ubuntu.go:177] setting up certificates
	I0401 20:38:58.075114  352934 provision.go:84] configureAuth start
	I0401 20:38:58.075164  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:58.094214  352934 provision.go:143] copyHostCerts
	I0401 20:38:58.094278  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem, removing ...
	I0401 20:38:58.094297  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem
	I0401 20:38:58.094685  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/cert.pem (1123 bytes)
	I0401 20:38:58.094794  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem, removing ...
	I0401 20:38:58.094805  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem
	I0401 20:38:58.094831  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/key.pem (1675 bytes)
	I0401 20:38:58.094936  352934 exec_runner.go:144] found /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem, removing ...
	I0401 20:38:58.094952  352934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem
	I0401 20:38:58.094980  352934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20506-16361/.minikube/ca.pem (1078 bytes)
	I0401 20:38:58.095049  352934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993330 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-993330 localhost minikube]
	I0401 20:38:58.234766  352934 provision.go:177] copyRemoteCerts
	I0401 20:38:58.234846  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0401 20:38:58.234897  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.268985  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.366478  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0401 20:38:58.390337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0401 20:38:58.413285  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0401 20:38:58.452125  352934 provision.go:87] duration metric: took 376.99619ms to configureAuth
	I0401 20:38:58.452155  352934 ubuntu.go:193] setting minikube options for container-runtime
	I0401 20:38:58.452388  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:38:58.452502  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.478039  352934 main.go:141] libmachine: Using SSH client type: native
	I0401 20:38:58.478248  352934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0401 20:38:58.478261  352934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0401 20:38:58.803667  352934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0401 20:38:58.803689  352934 machine.go:96] duration metric: took 4.231713518s to provisionDockerMachine
	I0401 20:38:58.803702  352934 start.go:293] postStartSetup for "default-k8s-diff-port-993330" (driver="docker")
	I0401 20:38:58.803715  352934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0401 20:38:58.803766  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0401 20:38:58.803807  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:58.830281  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.937600  352934 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.942153  352934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.942192  352934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.942202  352934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.942210  352934 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.942230  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.942291  352934 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.942386  352934 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.942518  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.956334  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.983879  352934 start.go:296] duration metric: took 180.163771ms for postStartSetup
	I0401 20:38:58.983960  352934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.983991  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.002575  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:58.014896  351594 ssh_runner.go:195] Run: cat /etc/os-release
	I0401 20:38:58.018005  351594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0401 20:38:58.018039  351594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0401 20:38:58.018050  351594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0401 20:38:58.018056  351594 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0401 20:38:58.018065  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/addons for local assets ...
	I0401 20:38:58.018122  351594 filesync.go:126] Scanning /home/jenkins/minikube-integration/20506-16361/.minikube/files for local assets ...
	I0401 20:38:58.018217  351594 filesync.go:149] local asset: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem -> 231632.pem in /etc/ssl/certs
	I0401 20:38:58.018329  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0401 20:38:58.029594  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:38:58.058013  351594 start.go:296] duration metric: took 162.909313ms for postStartSetup
	I0401 20:38:58.058074  351594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:38:58.058104  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.078753  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.170455  351594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.175254  351594 fix.go:56] duration metric: took 4.940602474s for fixHost
	I0401 20:38:58.175281  351594 start.go:83] releasing machines lock for "embed-certs-974821", held for 4.9406487s
	I0401 20:38:58.175350  351594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-974821
	I0401 20:38:58.195824  351594 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.195883  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.195887  351594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.195941  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:38:58.216696  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.217554  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:38:58.317364  351594 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.402372  351594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.467580  351594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.472889  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.483808  351594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.483870  351594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.492557  351594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.492581  351594 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.492612  351594 detect.go:187] detected "cgroupfs" cgroup driver on host os
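detect.go settles on the "cgroupfs" driver here. Independently of minikube, the cgroup hierarchy version a host exposes can be checked with the filesystem type of the cgroup mount (cgroup2fs indicates cgroup v2, tmpfs a v1 hierarchy):

	stat -fc %T /sys/fs/cgroup/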
	I0401 20:38:58.492656  351594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.503906  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.514753  351594 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.514797  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.530532  351594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.545218  351594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.634533  351594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:58.740609  351594 docker.go:233] disabling docker service ...
	I0401 20:38:58.740675  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:58.757811  351594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:58.769316  351594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:58.927560  351594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.017887  351594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.036043  351594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.062452  351594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:38:59.062511  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.072040  351594 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.072092  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.081316  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.090717  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.100633  351594 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.109276  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.119081  351594 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.132776  351594 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.144942  351594 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.157415  351594 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.170244  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.262627  351594 ssh_runner.go:195] Run: sudo systemctl restart crio
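After the sed pipeline above and the crio restart, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should read roughly as follows; the exact layout depends on the stock config shipped in the base image:

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",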
	I0401 20:38:59.410410  351594 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.410477  351594 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.413774  351594 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.413822  351594 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.416816  351594 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.467099  351594 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.467174  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.507883  351594 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.575644  351594 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
	I0401 20:38:58.418440  351961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:58.424362  351961 fix.go:56] duration metric: took 4.887880817s for fixHost
	I0401 20:38:58.424445  351961 start.go:83] releasing machines lock for "old-k8s-version-964633", held for 4.88798766s
	I0401 20:38:58.424546  351961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-964633
	I0401 20:38:58.452849  351961 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:58.452925  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.453154  351961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:58.453255  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:38:58.476968  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.480861  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:38:58.656620  351961 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:58.660863  351961 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:58.811060  351961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:58.820632  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.832745  351961 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:58.832809  351961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:58.843596  351961 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:58.843621  351961 start.go:495] detecting cgroup driver to use...
	I0401 20:38:58.843648  351961 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:58.843694  351961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:58.863375  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:58.874719  351961 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:58.874781  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:58.887671  351961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:58.897952  351961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:58.999694  351961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.092443  351961 docker.go:233] disabling docker service ...
	I0401 20:38:59.092514  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.104492  351961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.116744  351961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.228815  351961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:38:59.333394  351961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:38:59.348540  351961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:38:59.367380  351961 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0401 20:38:59.367456  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.378637  351961 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:38:59.378701  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.389089  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.398629  351961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:38:59.408282  351961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:38:59.416890  351961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:38:59.427052  351961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:38:59.436642  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.518454  351961 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:38:59.657852  351961 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:38:59.657924  351961 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:38:59.665839  351961 start.go:563] Will wait 60s for crictl version
	I0401 20:38:59.665887  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:38:59.669105  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:38:59.708980  351961 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:38:59.709049  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.779522  351961 ssh_runner.go:195] Run: crio --version
	I0401 20:38:59.821313  351961 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0401 20:38:58.132557  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:38:58.349953  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.8301036s)
	I0401 20:39:00.160568  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.631238812s)
	I0401 20:39:00.329074  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.496168303s)
	I0401 20:39:00.329117  347136 addons.go:479] Verifying addon metrics-server=true in "no-preload-671514"
	I0401 20:39:00.549528  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:00.564597  347136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.515099679s)
	I0401 20:39:00.566257  347136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-671514 addons enable metrics-server
	
	I0401 20:39:00.567767  347136 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0401 20:38:59.102229  352934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0401 20:38:59.106376  352934 fix.go:56] duration metric: took 4.875824459s for fixHost
	I0401 20:38:59.106403  352934 start.go:83] releasing machines lock for "default-k8s-diff-port-993330", held for 4.875877227s
	I0401 20:38:59.106467  352934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993330
	I0401 20:38:59.137666  352934 ssh_runner.go:195] Run: cat /version.json
	I0401 20:38:59.137721  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.137765  352934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0401 20:38:59.137838  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:38:59.164165  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.179217  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:38:59.261548  352934 ssh_runner.go:195] Run: systemctl --version
	I0401 20:38:59.348234  352934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0401 20:38:59.496358  352934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0401 20:38:59.501275  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.510535  352934 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0401 20:38:59.510618  352934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0401 20:38:59.521808  352934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0401 20:38:59.521883  352934 start.go:495] detecting cgroup driver to use...
	I0401 20:38:59.521929  352934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0401 20:38:59.521992  352934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0401 20:38:59.539597  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0401 20:38:59.557100  352934 docker.go:217] disabling cri-docker service (if available) ...
	I0401 20:38:59.557171  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0401 20:38:59.572388  352934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0401 20:38:59.586298  352934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0401 20:38:59.683279  352934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0401 20:38:59.775691  352934 docker.go:233] disabling docker service ...
	I0401 20:38:59.775764  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0401 20:38:59.787868  352934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0401 20:38:59.800876  352934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0401 20:38:59.904858  352934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0401 20:39:00.007211  352934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0401 20:39:00.019327  352934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0401 20:39:00.042921  352934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0401 20:39:00.042978  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.060613  352934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0401 20:39:00.060683  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.073546  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.084243  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.094331  352934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0401 20:39:00.108709  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.124148  352934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.138637  352934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0401 20:39:00.151200  352934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0401 20:39:00.163128  352934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0401 20:39:00.177441  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.308549  352934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0401 20:39:00.657013  352934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0401 20:39:00.657071  352934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0401 20:39:00.662239  352934 start.go:563] Will wait 60s for crictl version
	I0401 20:39:00.662306  352934 ssh_runner.go:195] Run: which crictl
	I0401 20:39:00.666702  352934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0401 20:39:00.714088  352934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0401 20:39:00.714165  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.773706  352934 ssh_runner.go:195] Run: crio --version
	I0401 20:39:00.860255  352934 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.24.6 ...
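	
	Editor's note: reconstructed from the printf and sed commands above, the runtime config files should end up roughly as follows for this v1.32.2 profile (a sketch of the end state, not a capture of the actual files; the surrounding lines of 02-crio.conf are not shown in the log, and the v1.20.0 profile skips the unprivileged-port sysctl edits):
	
	    # /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock
	
	    # /etc/crio/crio.conf.d/02-crio.conf (relevant lines)
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	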
	I0401 20:38:59.576927  351594 cli_runner.go:164] Run: docker network inspect embed-certs-974821 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.596266  351594 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.600170  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:38:59.610682  351594 kubeadm.go:883] updating cluster {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.610789  351594 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:38:59.610830  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.675301  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.675323  351594 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:38:59.675370  351594 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.709665  351594 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:38:59.709691  351594 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:38:59.709700  351594 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.32.2 crio true true} ...
	I0401 20:38:59.709867  351594 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-974821 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:38:59.709948  351594 ssh_runner.go:195] Run: crio config
	I0401 20:38:59.774069  351594 cni.go:84] Creating CNI manager for ""
	I0401 20:38:59.774094  351594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:38:59.774109  351594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:38:59.774135  351594 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-974821 NodeName:embed-certs-974821 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:38:59.774315  351594 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-974821"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:38:59.774384  351594 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:38:59.783346  351594 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:38:59.783405  351594 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:38:59.791915  351594 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0401 20:38:59.809157  351594 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:38:59.830198  351594 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0401 20:38:59.866181  351594 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:38:59.869502  351594 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
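	
	Editor's note: the one-liner above is a grep -v + echo + cp pattern that guarantees exactly one hosts entry for control-plane.minikube.internal. A local Go sketch of the same idea, assuming only the tab-separated /etc/hosts format shown in the log; upsertHost is a hypothetical helper and skips the temp-file-plus-sudo-cp step the log uses.
	
	    package main
	
	    import (
	        "fmt"
	        "log"
	        "os"
	        "strings"
	    )
	
	    // upsertHost rewrites an /etc/hosts-style file so that exactly one line
	    // maps ip to host: drop any existing entry for the host, then append a
	    // fresh "ip<TAB>host" line.
	    func upsertHost(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(string(data), "\n") {
	            if strings.HasSuffix(line, "\t"+host) {
	                continue // stale entry, drop it
	            }
	            if line != "" {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	    }
	
	    func main() {
	        if err := upsertHost("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
	            log.Fatal(err)
	        }
	    }
	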
	I0401 20:38:59.880701  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:38:59.988213  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:00.002261  351594 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821 for IP: 192.168.94.2
	I0401 20:39:00.002294  351594 certs.go:194] generating shared ca certs ...
	I0401 20:39:00.002318  351594 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.002493  351594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:00.002551  351594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:00.002565  351594 certs.go:256] generating profile certs ...
	I0401 20:39:00.002694  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/client.key
	I0401 20:39:00.002770  351594 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key.9ef4ba6e
	I0401 20:39:00.002821  351594 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key
	I0401 20:39:00.003111  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:00.003192  351594 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:00.003203  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:00.003234  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:00.003269  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:00.003302  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:00.003360  351594 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:00.004109  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:00.043414  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:00.086922  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:00.131018  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:00.199071  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0401 20:39:00.250948  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:00.299580  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:00.340427  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/embed-certs-974821/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:00.371787  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:00.405208  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:00.450777  351594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:00.475915  351594 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:00.493330  351594 ssh_runner.go:195] Run: openssl version
	I0401 20:39:00.498599  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:00.508753  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513352  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.513426  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:00.523178  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:00.535753  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:00.548198  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553063  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.553119  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:00.562612  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:00.575635  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:00.588254  351594 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592610  351594 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.592674  351594 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:00.602558  351594 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
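	
	Editor's note: the 3ec20f2e.0, b5213941.0, and 51391683.0 names above are OpenSSL subject-hash links: `openssl x509 -hash -noout` prints the subject hash, and OpenSSL locates a trusted CA by following the /etc/ssl/certs/<hash>.0 symlink. A sketch combining the two logged commands; trustCert is a hypothetical helper, and unlike the log it creates the link without sudo.
	
	    package main
	
	    import (
	        "fmt"
	        "log"
	        "os"
	        "os/exec"
	        "strings"
	    )
	
	    // trustCert computes the certificate's subject hash with openssl and
	    // points the /etc/ssl/certs/<hash>.0 symlink at it, mirroring the
	    // `openssl x509 -hash` + `ln -fs` pair in the log.
	    func trustCert(pemPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return fmt.Errorf("hash %s: %w", pemPath, err)
	        }
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	        os.Remove(link) // replace any stale link, like `ln -fs`
	        return os.Symlink(pemPath, link)
	    }
	
	    func main() {
	        if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            log.Fatal(err)
	        }
	    }
	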
	I0401 20:39:00.615003  351594 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:00.621769  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:00.631718  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:00.640716  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:00.648071  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:00.656537  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:00.665200  351594 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
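	
	Editor's note: the six `-checkend 86400` probes above ask whether each control-plane certificate survives the next 24 hours. openssl exits 0 when the certificate is still valid 86400 seconds from now and non-zero when it would have expired, so a failing command flags the cert for regeneration. A minimal wrapper (expiresWithinADay is a hypothetical name):
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    // expiresWithinADay reports whether the certificate at certPath expires
	    // within 86400 seconds, using openssl's -checkend exit status.
	    func expiresWithinADay(certPath string) bool {
	        err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	        return err != nil
	    }
	
	    func main() {
	        fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver.crt"))
	    }
	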
	I0401 20:39:00.672896  351594 kubeadm.go:392] StartCluster: {Name:embed-certs-974821 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-974821 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:00.673024  351594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:00.673084  351594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:00.766526  351594 cri.go:89] found id: ""
	I0401 20:39:00.766583  351594 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:00.783725  351594 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:00.783748  351594 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:00.783804  351594 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:00.847802  351594 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:00.848533  351594 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-974821" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.848902  351594 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-974821" cluster setting kubeconfig missing "embed-certs-974821" context setting]
	I0401 20:39:00.849559  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
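	
	Editor's note: the "needs updating (will repair)" check above looks for the profile's cluster and context entries in the shared kubeconfig. A sketch of that check using the k8s.io/client-go module's clientcmd loader, which is an assumption about tooling rather than minikube's own kubeconfig.go; needsRepair is a hypothetical name.
	
	    package main
	
	    import (
	        "fmt"
	        "log"
	
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // needsRepair reports whether the kubeconfig at path is missing either
	    // the cluster or the context entry for the given profile name.
	    func needsRepair(path, name string) (bool, error) {
	        cfg, err := clientcmd.LoadFromFile(path)
	        if err != nil {
	            return false, fmt.Errorf("load %s: %w", path, err)
	        }
	        _, hasCluster := cfg.Clusters[name]
	        _, hasContext := cfg.Contexts[name]
	        return !hasCluster || !hasContext, nil
	    }
	
	    func main() {
	        repair, err := needsRepair("/home/jenkins/minikube-integration/20506-16361/kubeconfig", "embed-certs-974821")
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Println("needs repair:", repair)
	    }
	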
	I0401 20:39:00.851726  351594 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:00.864296  351594 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0401 20:39:00.864336  351594 kubeadm.go:597] duration metric: took 80.580617ms to restartPrimaryControlPlane
	I0401 20:39:00.864354  351594 kubeadm.go:394] duration metric: took 191.463145ms to StartCluster
	I0401 20:39:00.864375  351594 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.864449  351594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:00.866078  351594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:00.866359  351594 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:00.866582  351594 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:00.866695  351594 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-974821"
	I0401 20:39:00.866730  351594 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-974821"
	I0401 20:39:00.866763  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866789  351594 addons.go:69] Setting default-storageclass=true in profile "embed-certs-974821"
	I0401 20:39:00.866811  351594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-974821"
	I0401 20:39:00.867102  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867302  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.867499  351594 addons.go:69] Setting metrics-server=true in profile "embed-certs-974821"
	I0401 20:39:00.867522  351594 addons.go:238] Setting addon metrics-server=true in "embed-certs-974821"
	W0401 20:39:00.867531  351594 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:00.867563  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.867602  351594 addons.go:69] Setting dashboard=true in profile "embed-certs-974821"
	I0401 20:39:00.867665  351594 addons.go:238] Setting addon dashboard=true in "embed-certs-974821"
	W0401 20:39:00.867675  351594 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:00.867748  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.866768  351594 config.go:182] Loaded profile config "embed-certs-974821": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:00.868027  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868414  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.868860  351594 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:00.870326  351594 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:00.906509  351594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:00.906586  351594 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.906977  351594 addons.go:238] Setting addon default-storageclass=true in "embed-certs-974821"
	I0401 20:39:00.907012  351594 host.go:66] Checking if "embed-certs-974821" exists ...
	I0401 20:39:00.907458  351594 cli_runner.go:164] Run: docker container inspect embed-certs-974821 --format={{.State.Status}}
	I0401 20:39:00.907881  351594 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:00.907903  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:00.907948  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.909212  351594 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:00.909213  351594 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:00.569014  347136 addons.go:514] duration metric: took 5.545771269s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0401 20:39:00.861645  352934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993330 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:39:00.892893  352934 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0401 20:39:00.898812  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:00.914038  352934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:39:00.914211  352934 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 20:39:00.914281  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.001845  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.001870  352934 crio.go:433] Images already preloaded, skipping extraction
	I0401 20:39:01.001928  352934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:01.079561  352934 crio.go:514] all images are preloaded for cri-o runtime.
	I0401 20:39:01.079592  352934 cache_images.go:84] Images are preloaded, skipping loading
	I0401 20:39:01.079604  352934 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.32.2 crio true true} ...
	I0401 20:39:01.079735  352934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:01.079820  352934 ssh_runner.go:195] Run: crio config
	I0401 20:39:01.181266  352934 cni.go:84] Creating CNI manager for ""
	I0401 20:39:01.181283  352934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:01.181294  352934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:01.181313  352934 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993330 NodeName:default-k8s-diff-port-993330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0401 20:39:01.181431  352934 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993330"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0401 20:39:01.181486  352934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0401 20:39:01.196494  352934 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:01.196546  352934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:01.209119  352934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0401 20:39:01.231489  352934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:01.266192  352934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0401 20:39:01.287435  352934 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:01.292197  352934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:01.305987  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:01.409717  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.430576  352934 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330 for IP: 192.168.103.2
	I0401 20:39:01.430602  352934 certs.go:194] generating shared ca certs ...
	I0401 20:39:01.430622  352934 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:01.430799  352934 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:01.430868  352934 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:01.430882  352934 certs.go:256] generating profile certs ...
	I0401 20:39:01.430988  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/client.key
	I0401 20:39:01.431061  352934 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key.604428a1
	I0401 20:39:01.431116  352934 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key
	I0401 20:39:01.431248  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:01.431282  352934 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:01.431291  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:01.431320  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:01.431345  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:01.431375  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:01.431426  352934 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:01.432312  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:01.492228  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:01.531474  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:01.591214  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:01.646862  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0401 20:39:01.673390  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:01.696337  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:01.721680  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/default-k8s-diff-port-993330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0401 20:39:01.756071  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:01.779072  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:01.803739  352934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:01.830973  352934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:01.853698  352934 ssh_runner.go:195] Run: openssl version
	I0401 20:39:01.860789  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:01.869990  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873406  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.873466  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:01.879852  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:01.888495  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:01.897967  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901409  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.901490  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:01.908132  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:01.917981  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:01.929846  352934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935022  352934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.935082  352934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:01.944568  352934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
	I0401 20:39:01.955161  352934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:01.959776  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:01.967922  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:01.974184  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:01.980155  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:01.986629  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:01.993055  352934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:01.999192  352934 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-993330 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:default-k8s-diff-port-993330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:01.999274  352934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:01.999339  352934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:02.049294  352934 cri.go:89] found id: ""
	I0401 20:39:02.049371  352934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:02.061603  352934 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:02.061627  352934 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:02.061672  352934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:02.071486  352934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:02.072578  352934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993330" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.073083  352934 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993330" cluster setting kubeconfig missing "default-k8s-diff-port-993330" context setting]
	I0401 20:39:02.073890  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.076069  352934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:02.085167  352934 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0401 20:39:02.085198  352934 kubeadm.go:597] duration metric: took 23.565213ms to restartPrimaryControlPlane
	I0401 20:39:02.085207  352934 kubeadm.go:394] duration metric: took 86.023549ms to StartCluster
	I0401 20:39:02.085233  352934 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.085303  352934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:02.086751  352934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:02.086981  352934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:02.087055  352934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:02.087156  352934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087171  352934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.087194  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.087277  352934 config.go:182] Loaded profile config "default-k8s-diff-port-993330": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:39:02.087341  352934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087361  352934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993330"
	I0401 20:39:02.087661  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087716  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.087804  352934 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.087845  352934 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.087856  352934 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:02.087894  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088052  352934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-993330"
	I0401 20:39:02.088097  352934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-993330"
	W0401 20:39:02.088140  352934 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:02.088174  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.088393  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.088685  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.089041  352934 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:02.090870  352934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:02.116636  352934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-993330"
	I0401 20:39:02.116682  352934 host.go:66] Checking if "default-k8s-diff-port-993330" exists ...
	I0401 20:39:02.117105  352934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993330 --format={{.State.Status}}
	I0401 20:39:02.118346  352934 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:02.118443  352934 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0401 20:39:02.127274  352934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:02.127339  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:02.127357  352934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:02.127428  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.128779  352934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.128798  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:02.128846  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
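The "scp memory --> ..." transfers stream manifests that are embedded in the minikube binary straight to the node over SSH; no local file is involved. A minimal sketch of that pattern with golang.org/x/crypto/ssh (hypothetical helper, not minikube's actual transfer code, and it assumes passwordless sudo on the node):

	package sshutil

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	// PushBytes streams in-memory data to remotePath on the host behind
	// client, mimicking the "scp memory --> ..." steps in the log.
	func PushBytes(client *ssh.Client, data []byte, remotePath string) error {
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		// The payload is fed to the remote command's stdin.
		session.Stdin = bytes.NewReader(data)
		// "sudo tee" writes stdin to the destination file.
		return session.Run("sudo tee " + remotePath + " >/dev/null")
	}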
	I0401 20:39:02.129065  352934 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:00.910296  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:00.910308  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:00.910331  351594 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:00.910388  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.910310  351594 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:00.910464  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.936194  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.939226  351594 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:00.939253  351594 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:00.939302  351594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-974821
	I0401 20:39:00.955547  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.955989  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:00.995581  351594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/embed-certs-974821/id_rsa Username:docker}
	I0401 20:39:01.148209  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:01.148254  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:01.233150  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:01.233178  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:01.237979  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:01.238004  351594 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:01.245451  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:01.326103  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:01.330462  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:01.330484  351594 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:01.333439  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:01.333458  351594 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:01.432762  351594 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.432790  351594 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:01.440420  351594 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:01.464879  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:01.464912  351594 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:01.620343  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:01.620370  351594 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:01.626476  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:01.731058  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:01.731086  351594 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:01.840203  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:01.840234  351594 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:01.923226  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:01.923256  351594 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:01.946227  351594 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:01.946251  351594 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:01.967792  351594 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:38:59.822502  351961 cli_runner.go:164] Run: docker network inspect old-k8s-version-964633 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0401 20:38:59.859876  351961 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0401 20:38:59.864588  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
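This one-liner is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, echo appends the current mapping, and the result is staged in a temp file and then installed with a single sudo cp so the edit lands in one atomic copy.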
	I0401 20:38:59.875731  351961 kubeadm.go:883] updating cluster {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0401 20:38:59.875830  351961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 20:38:59.875868  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:38:59.916903  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:38:59.916972  351961 ssh_runner.go:195] Run: which lz4
	I0401 20:38:59.924687  351961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0401 20:38:59.929326  351961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0401 20:38:59.929361  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0401 20:39:01.595956  351961 crio.go:462] duration metric: took 1.671314572s to copy over tarball
	I0401 20:39:01.596056  351961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
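In the extraction command, -I lz4 decompresses through the lz4 filter, -C /var unpacks the preloaded images and etcd data under /var, and the --xattrs/--xattrs-include security.capability flags preserve extended attributes so the preloaded binaries keep their file capabilities.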
	I0401 20:39:02.133262  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:02.133286  352934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:02.133360  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.174061  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.183470  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186828  352934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.186849  352934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:02.186839  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.186902  352934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993330
	I0401 20:39:02.221878  352934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/default-k8s-diff-port-993330/id_rsa Username:docker}
	I0401 20:39:02.357264  352934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:02.369894  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:02.418319  352934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:39:02.424368  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:02.424394  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:02.518463  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:02.518487  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:02.518908  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:02.552283  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:02.552311  352934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:02.625174  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:02.625203  352934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:02.630561  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:02.630585  352934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:02.754984  352934 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.755012  352934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0401 20:39:02.831957  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832016  352934 retry.go:31] will retry after 167.103605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0401 20:39:02.832502  352934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0401 20:39:02.832541  352934 retry.go:31] will retry after 331.737592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
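Both applies fail while the apiserver on port 8444 is still coming up, so the addon step retries with a growing delay (167ms, then 331ms above). A generic sketch of that loop (a hypothetical helper; minikube's real retry.go differs in detail):

	package retryutil

	import "time"

	// Retry runs fn until it succeeds or attempts run out, roughly doubling
	// the delay between tries -- the shape of the "will retry after ..."
	// messages in the log.
	func Retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2 // back off before the next attempt
		}
		return err
	}

Note that the retried applies below switch to kubectl apply --force.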
	I0401 20:39:02.844243  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:02.844284  352934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:02.845125  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:02.941398  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:02.941430  352934 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0401 20:39:03.000175  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:03.020897  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:03.020925  352934 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:03.049959  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:03.049990  352934 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:03.141305  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:03.141375  352934 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0401 20:39:03.164774  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:03.233312  352934 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:03.233345  352934 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:03.256933  352934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:06.674867  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.429316088s)
	I0401 20:39:06.674935  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.34880877s)
	I0401 20:39:06.675318  351594 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.234867378s)
	I0401 20:39:06.675347  351594 node_ready.go:35] waiting up to 6m0s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:39:06.779842  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.153328436s)
	I0401 20:39:06.779881  351594 addons.go:479] Verifying addon metrics-server=true in "embed-certs-974821"
	I0401 20:39:06.886105  351594 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.918277142s)
	I0401 20:39:06.887414  351594 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-974821 addons enable metrics-server
	
	I0401 20:39:06.888540  351594 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0401 20:39:02.553791  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:05.029461  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
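The node_ready.go lines keep polling no-preload-671514 until it reports Ready. With client-go, the equivalent status check reads the NodeReady condition; a sketch assuming an already-configured clientset (not minikube's own code):

	package nodeutil

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// NodeReady reports whether the named node has condition Ready=True --
	// the check behind the `"Ready":"False"` polling above.
	func NodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}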
	I0401 20:39:04.709726  351961 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.113631874s)
	I0401 20:39:04.709778  351961 crio.go:469] duration metric: took 3.113777603s to extract the tarball
	I0401 20:39:04.709789  351961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0401 20:39:04.806594  351961 ssh_runner.go:195] Run: sudo crictl images --output json
	I0401 20:39:04.861422  351961 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0401 20:39:04.861451  351961 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0401 20:39:04.861512  351961 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.861543  351961 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.861553  351961 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.861581  351961 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.861609  351961 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.861642  351961 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.861654  351961 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0401 20:39:04.861801  351961 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863284  351961 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:04.863664  351961 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:04.863712  351961 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:04.863738  351961 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:04.863662  351961 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:04.863893  351961 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:04.863915  351961 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:04.864371  351961 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0401 20:39:05.123716  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.130469  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.151746  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0401 20:39:05.181431  351961 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0401 20:39:05.181505  351961 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.181544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.183293  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.183573  351961 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0401 20:39:05.183645  351961 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.183713  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.194122  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.206768  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.231458  351961 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0401 20:39:05.231520  351961 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0401 20:39:05.231565  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.231699  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.249694  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.334087  351961 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0401 20:39:05.334138  351961 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.334211  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.334360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.362019  351961 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0401 20:39:05.362081  351961 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.362124  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.362276  351961 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0401 20:39:05.362361  351961 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.362413  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.369588  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.369603  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.381417  351961 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0401 20:39:05.381482  351961 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.381544  351961 ssh_runner.go:195] Run: which crictl
	I0401 20:39:05.464761  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.464910  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.465076  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.549955  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0401 20:39:05.550175  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.550207  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.550179  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.550247  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0401 20:39:05.550360  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.550376  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772125  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0401 20:39:05.772249  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.772301  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0401 20:39:05.772404  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0401 20:39:05.772507  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0401 20:39:05.772598  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.772692  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0401 20:39:05.854551  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0401 20:39:05.866611  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0401 20:39:05.871030  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0401 20:39:05.877182  351961 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0401 20:39:05.877257  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0401 20:39:05.933567  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0401 20:39:05.983883  351961 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0401 20:39:06.108361  351961 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:06.281713  351961 cache_images.go:92] duration metric: took 1.420243788s to LoadCachedImages
	W0401 20:39:06.281833  351961 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20506-16361/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0401 20:39:06.281852  351961 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 crio true true} ...
	I0401 20:39:06.281948  351961 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-964633 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0401 20:39:06.282127  351961 ssh_runner.go:195] Run: crio config
	I0401 20:39:06.346838  351961 cni.go:84] Creating CNI manager for ""
	I0401 20:39:06.346887  351961 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 20:39:06.346902  351961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0401 20:39:06.346941  351961 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-964633 NodeName:old-k8s-version-964633 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0401 20:39:06.347139  351961 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-964633"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
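The generated kubeadm config stacks four documents separated by "---": an InitConfiguration (node-local bootstrap settings), a ClusterConfiguration (control-plane layout and component extraArgs), a KubeletConfiguration, and a KubeProxyConfiguration; the kubeadm.k8s.io/v1beta2 API version matches the target v1.20.0 cluster.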
	I0401 20:39:06.347231  351961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0401 20:39:06.359645  351961 binaries.go:44] Found k8s binaries, skipping transfer
	I0401 20:39:06.359730  351961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0401 20:39:06.372620  351961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0401 20:39:06.391931  351961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0401 20:39:06.408947  351961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0401 20:39:06.428949  351961 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0401 20:39:06.433831  351961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0401 20:39:06.449460  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:06.554432  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:06.576295  351961 certs.go:68] Setting up /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633 for IP: 192.168.85.2
	I0401 20:39:06.576319  351961 certs.go:194] generating shared ca certs ...
	I0401 20:39:06.576336  351961 certs.go:226] acquiring lock for ca certs: {Name:mkd05003433d6acd6ff5e2aab58050e1004ceb11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:06.576497  351961 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key
	I0401 20:39:06.576546  351961 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key
	I0401 20:39:06.576558  351961 certs.go:256] generating profile certs ...
	I0401 20:39:06.576669  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/client.key
	I0401 20:39:06.576732  351961 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key.4d8a9adb
	I0401 20:39:06.576777  351961 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key
	I0401 20:39:06.576941  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem (1338 bytes)
	W0401 20:39:06.576987  351961 certs.go:480] ignoring /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163_empty.pem, impossibly tiny 0 bytes
	I0401 20:39:06.577003  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca-key.pem (1679 bytes)
	I0401 20:39:06.577042  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/ca.pem (1078 bytes)
	I0401 20:39:06.577080  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/cert.pem (1123 bytes)
	I0401 20:39:06.577112  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/certs/key.pem (1675 bytes)
	I0401 20:39:06.577202  351961 certs.go:484] found cert: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem (1708 bytes)
	I0401 20:39:06.577963  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0401 20:39:06.602653  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0401 20:39:06.647086  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0401 20:39:06.690813  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0401 20:39:06.713070  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0401 20:39:06.746377  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0401 20:39:06.778703  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0401 20:39:06.803718  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/old-k8s-version-964633/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0401 20:39:06.834308  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0401 20:39:06.866056  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/certs/23163.pem --> /usr/share/ca-certificates/23163.pem (1338 bytes)
	I0401 20:39:06.894035  351961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/ssl/certs/231632.pem --> /usr/share/ca-certificates/231632.pem (1708 bytes)
	I0401 20:39:06.917385  351961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0401 20:39:06.947636  351961 ssh_runner.go:195] Run: openssl version
	I0401 20:39:06.953888  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0401 20:39:06.964321  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968171  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  1 19:46 /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.968226  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0401 20:39:06.974617  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0401 20:39:06.983475  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/23163.pem && ln -fs /usr/share/ca-certificates/23163.pem /etc/ssl/certs/23163.pem"
	I0401 20:39:06.992762  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996366  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  1 19:52 /usr/share/ca-certificates/23163.pem
	I0401 20:39:06.996428  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/23163.pem
	I0401 20:39:07.002911  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/23163.pem /etc/ssl/certs/51391683.0"
	I0401 20:39:07.010996  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/231632.pem && ln -fs /usr/share/ca-certificates/231632.pem /etc/ssl/certs/231632.pem"
	I0401 20:39:07.021397  351961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.025984  351961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  1 19:52 /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.026067  351961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/231632.pem
	I0401 20:39:07.035957  351961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/231632.pem /etc/ssl/certs/3ec20f2e.0"
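The b5213941.0, 51391683.0 and 3ec20f2e.0 names are OpenSSL subject-name hashes: the "openssl x509 -hash -noout" runs above compute them, and the hash-named symlinks in /etc/ssl/certs let OpenSSL find each trusted CA by hash lookup instead of scanning every file in the directory.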
	I0401 20:39:07.047833  351961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0401 20:39:07.052899  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0401 20:39:07.060002  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0401 20:39:07.066825  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0401 20:39:07.073034  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0401 20:39:07.079402  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0401 20:39:07.085484  351961 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0401 20:39:07.091397  351961 kubeadm.go:392] StartCluster: {Name:old-k8s-version-964633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-964633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 20:39:07.091492  351961 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0401 20:39:07.091548  351961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0401 20:39:07.128264  351961 cri.go:89] found id: ""
	I0401 20:39:07.128349  351961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0401 20:39:07.140888  351961 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0401 20:39:07.140912  351961 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0401 20:39:07.140958  351961 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0401 20:39:07.153231  351961 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0401 20:39:07.154670  351961 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-964633" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.155719  351961 kubeconfig.go:62] /home/jenkins/minikube-integration/20506-16361/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-964633" cluster setting kubeconfig missing "old-k8s-version-964633" context setting]
	I0401 20:39:07.157163  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.158757  351961 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0401 20:39:07.168027  351961 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0401 20:39:07.168059  351961 kubeadm.go:597] duration metric: took 27.141864ms to restartPrimaryControlPlane
	I0401 20:39:07.168067  351961 kubeadm.go:394] duration metric: took 76.688394ms to StartCluster
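
The "duration metric" lines are plain wall-clock measurements taken around each step. A rough sketch of the pattern, with a hypothetical stand-in for the measured step:

package main

import (
	"log"
	"time"
)

// restartPrimaryControlPlane is a hypothetical stand-in for the step
// being timed in the log; only the timing pattern is the point here.
func restartPrimaryControlPlane() error {
	time.Sleep(25 * time.Millisecond)
	return nil
}

func main() {
	start := time.Now()
	if err := restartPrimaryControlPlane(); err != nil {
		log.Fatal(err)
	}
	log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
}
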
	I0401 20:39:07.168080  351961 settings.go:142] acquiring lock: {Name:mk97b1c786280c2571e72d442bae7d86f342cc2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.168127  351961 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:39:07.169725  351961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/kubeconfig: {Name:mk2c8ba46a7915412f877cabac9093904e092601 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 20:39:07.170008  351961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0401 20:39:07.170125  351961 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0401 20:39:07.170223  351961 config.go:182] Loaded profile config "old-k8s-version-964633": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0401 20:39:07.170239  351961 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170242  351961 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170266  351961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-964633"
	I0401 20:39:07.170225  351961 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170274  351961 addons.go:69] Setting dashboard=true in profile "old-k8s-version-964633"
	I0401 20:39:07.170287  351961 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-964633"
	I0401 20:39:07.170295  351961 addons.go:238] Setting addon dashboard=true in "old-k8s-version-964633"
	W0401 20:39:07.170305  351961 addons.go:247] addon dashboard should already be in state true
	I0401 20:39:07.170370  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170317  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170271  351961 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-964633"
	W0401 20:39:07.170518  351961 addons.go:247] addon metrics-server should already be in state true
	I0401 20:39:07.170538  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.170635  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170752  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170790  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.170972  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.172169  351961 out.go:177] * Verifying Kubernetes components...
	I0401 20:39:07.173505  351961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0401 20:39:07.195280  351961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0401 20:39:07.195309  351961 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0401 20:39:07.196717  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0401 20:39:07.196841  351961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0401 20:39:07.196856  351961 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.196872  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0401 20:39:07.196915  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.196942  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.197394  351961 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-964633"
	I0401 20:39:07.197435  351961 host.go:66] Checking if "old-k8s-version-964633" exists ...
	I0401 20:39:07.197859  351961 cli_runner.go:164] Run: docker container inspect old-k8s-version-964633 --format={{.State.Status}}
	I0401 20:39:07.199010  351961 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
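
The `docker container inspect -f` runs above use a Go template to pull the host port published for the container's 22/tcp, which then feeds the ssh clients (port 33118 below). A sketch that shells out with the same template — the helper name and error handling are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hostSSHPort resolves the host port mapped to the container's 22/tcp,
// using the same Go template the log shows minikube passing to docker.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Container name taken from the log; any container publishing 22/tcp works.
	port, err := hostSSHPort("old-k8s-version-964633")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh host port:", port)
}
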
	I0401 20:39:06.889586  351594 addons.go:514] duration metric: took 6.02301545s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0401 20:39:06.035393  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:08.049476  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.204308009s)
	I0401 20:39:08.049521  352934 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-993330"
	I0401 20:39:08.049607  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.04941057s)
	I0401 20:39:08.049656  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.884816314s)
	I0401 20:39:08.153809  352934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.89678194s)
	I0401 20:39:08.155169  352934 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993330 addons enable metrics-server
	
	I0401 20:39:08.156587  352934 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
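
Each apply above is a single remote command: sudo, KUBECONFIG pointed at the on-host kubeconfig, the version-pinned kubectl under /var/lib/minikube/binaries, and one -f flag per manifest. A small sketch of assembling that command string — the helper is illustrative, and running the result requires the same on-host layout:

package main

import (
	"fmt"
	"strings"
)

// buildApplyCmd assembles the invocation seen in the log:
// sudo KUBECONFIG=... /var/lib/minikube/binaries/<ver>/kubectl apply -f m1 -f m2 ...
func buildApplyCmd(kubeVersion string, manifests ...string) string {
	args := []string{
		"sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		fmt.Sprintf("/var/lib/minikube/binaries/%s/kubectl", kubeVersion),
		"apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(buildApplyCmd("v1.32.2",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
	))
}
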
	I0401 20:39:07.199890  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0401 20:39:07.199903  351961 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0401 20:39:07.199941  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.234503  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.235163  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.237888  351961 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.237904  351961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0401 20:39:07.237966  351961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-964633
	I0401 20:39:07.247920  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.267742  351961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/old-k8s-version-964633/id_rsa Username:docker}
	I0401 20:39:07.287255  351961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0401 20:39:07.299956  351961 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:39:07.369975  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0401 20:39:07.370003  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0401 20:39:07.370256  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:07.370275  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0401 20:39:07.370375  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0401 20:39:07.375999  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:07.389489  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0401 20:39:07.389519  351961 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0401 20:39:07.392617  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0401 20:39:07.392649  351961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0401 20:39:07.428112  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0401 20:39:07.428142  351961 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0401 20:39:07.433897  351961 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.433992  351961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0401 20:39:07.455617  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0401 20:39:07.455648  351961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0401 20:39:07.476492  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.529951  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0401 20:39:07.529980  351961 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0401 20:39:07.536397  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.536442  351961 retry.go:31] will retry after 370.337463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
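
Every "apply failed, will retry" / "will retry after" pair in this stretch is one turn of a retry loop with a randomized, growing delay while the apiserver on localhost:8443 comes back up. A generic Go sketch of that pattern (not minikube's retry.go, whose internals the log doesn't show):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to attempts times, sleeping a randomized,
// doubling delay between failures, mirroring the "will retry after Xms" lines.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 { // simulate the apiserver refusing connections at first
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	})
}
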
	W0401 20:39:07.556425  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.556472  351961 retry.go:31] will retry after 235.723504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.561306  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0401 20:39:07.561336  351961 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0401 20:39:07.584704  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0401 20:39:07.584735  351961 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0401 20:39:07.625764  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0401 20:39:07.625798  351961 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0401 20:39:07.645378  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.645415  351961 retry.go:31] will retry after 255.777707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.649636  351961 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:07.649669  351961 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0401 20:39:07.671677  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:07.737362  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.737401  351961 retry.go:31] will retry after 262.88549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.792468  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:07.866562  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.866592  351961 retry.go:31] will retry after 533.454773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.901800  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:07.907022  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:07.980401  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.980441  351961 retry.go:31] will retry after 228.624656ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:07.988393  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:07.988424  351961 retry.go:31] will retry after 448.714243ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.000515  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.081285  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.081315  351961 retry.go:31] will retry after 447.290555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.209566  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.282910  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.282939  351961 retry.go:31] will retry after 345.008526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.157608  352934 addons.go:514] duration metric: took 6.070557386s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0401 20:39:08.420842  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:07.528498  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:10.028235  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:08.679057  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:11.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:08.400904  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:08.437284  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:08.472258  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.472309  351961 retry.go:31] will retry after 320.641497ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:08.510915  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.510944  351961 retry.go:31] will retry after 492.726701ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.529147  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:08.591983  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.592084  351961 retry.go:31] will retry after 465.236717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.628174  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:08.689124  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.689154  351961 retry.go:31] will retry after 943.995437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.793440  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:08.855206  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:08.855246  351961 retry.go:31] will retry after 720.227519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.004533  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:09.058355  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.065907  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.065942  351961 retry.go:31] will retry after 1.037966025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.117446  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.117479  351961 retry.go:31] will retry after 754.562948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.301005  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:09.576438  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:09.633510  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:09.635214  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.635244  351961 retry.go:31] will retry after 1.847480271s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:09.696503  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.696537  351961 retry.go:31] will retry after 1.037435117s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.872202  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:09.938840  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:09.938877  351961 retry.go:31] will retry after 1.127543746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.104125  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:10.166892  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.166930  351961 retry.go:31] will retry after 791.488522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.734957  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:10.793410  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.793444  351961 retry.go:31] will retry after 1.012309026s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.959155  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0401 20:39:11.016633  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.016669  351961 retry.go:31] will retry after 2.653496764s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.066845  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:11.124814  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.124847  351961 retry.go:31] will retry after 1.791931046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.301448  351961 node_ready.go:53] error getting node "old-k8s-version-964633": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-964633": dial tcp 192.168.85.2:8443: connect: connection refused
	I0401 20:39:11.483750  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0401 20:39:11.543399  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.543438  351961 retry.go:31] will retry after 1.223481684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.806367  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0401 20:39:11.864183  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:11.864221  351961 retry.go:31] will retry after 1.951915637s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:12.767684  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:12.917803  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0401 20:39:13.037405  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.037443  351961 retry.go:31] will retry after 3.340804626s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0401 20:39:13.137455  351961 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:13.137492  351961 retry.go:31] will retry after 1.845170825s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0401 20:39:10.921348  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.922070  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:12.029055  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:14.029334  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:16.528266  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:13.678285  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:15.678948  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:13.670763  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0401 20:39:13.816520  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0401 20:39:14.983231  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0401 20:39:16.378470  351961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0401 20:39:17.228294  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:18.134996  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.464190797s)
	I0401 20:39:18.137960  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.321398465s)
	I0401 20:39:18.137997  351961 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-964633"
	I0401 20:39:18.333702  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.350416291s)
	I0401 20:39:18.333724  351961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.955165189s)
	I0401 20:39:18.335497  351961 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-964633 addons enable metrics-server
	
	I0401 20:39:18.338389  351961 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0401 20:39:18.339702  351961 addons.go:514] duration metric: took 11.169580256s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
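	The addon sequence above is minikube applying each manifest on the guest with the version-matched kubectl binary, pinned to the cluster's kubeconfig, then logging the duration of each apply on completion. A minimal Go sketch of that run-and-time pattern follows; the sudo/KUBECONFIG invocation is copied from the log lines, but the error handling and single manifest path are illustrative, not minikube's actual ssh_runner:

	// addon_apply.go — a minimal sketch of the apply-and-time pattern the
	// ssh_runner lines above record. The single manifest path is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Mirror the logged command: sudo accepts VAR=value assignments before
		// the command, so KUBECONFIG is pinned for the version-matched kubectl.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.20.0/kubectl",
			"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("Completed: %v: (%s)\n", cmd, time.Since(start))
	}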
	I0401 20:39:14.922389  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:17.422517  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:18.528645  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:21.028918  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:18.179007  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:20.679261  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:19.303490  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:21.802650  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:19.922052  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:22.421928  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:23.528755  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:25.528817  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:23.178561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:25.179370  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:27.678492  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:23.802992  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:26.303337  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:24.921257  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:26.921566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.921721  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:28.028278  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.029294  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:30.178068  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:32.178407  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:28.803030  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:30.803142  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:32.804506  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:31.421529  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:33.422314  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:32.528771  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:35.028310  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:34.678401  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:36.678436  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:34.820252  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:37.303538  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:35.921129  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.921575  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:37.029142  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.529041  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:39.178430  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:41.178761  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:39.803103  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:41.803218  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:39.921632  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.421978  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:42.028775  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:44.528465  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:43.678961  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:46.178802  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:43.805102  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:46.303301  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:44.921055  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:46.921300  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:47.028468  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:49.029516  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:51.528326  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:48.678166  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:50.678827  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:48.803449  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:51.303940  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:49.420997  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:51.421299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.921144  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:53.528537  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:56.028170  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:39:53.178385  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:55.678420  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:57.679098  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:39:53.802524  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.803593  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:58.303096  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:39:55.921434  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:57.921711  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:39:58.528054  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.528629  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:00.178311  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:02.678352  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:00.303306  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:02.303647  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:00.421483  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:02.421534  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:03.028408  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:05.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:04.678899  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:06.679157  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:04.303895  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:06.803026  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:04.421710  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:06.422190  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:08.921100  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:07.528908  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:10.028327  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:09.178223  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:11.179569  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:08.803438  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:11.303934  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:10.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:13.420981  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:12.029192  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:14.528262  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:16.528863  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:13.678318  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:15.678351  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:13.802740  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.802953  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:17.803604  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:15.421233  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:17.421572  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:19.028399  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:21.028986  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:18.178555  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.178847  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:22.678795  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:20.303070  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:22.803236  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:19.921330  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:21.921496  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:23.528700  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:26.028827  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:25.178198  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:27.178525  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:25.302929  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:27.803100  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:24.421920  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:26.921609  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:28.028880  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:30.528993  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:29.178683  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:31.678813  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:30.302947  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:32.303237  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:29.421343  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:31.920938  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.921570  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:33.029335  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:35.528263  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:33.678935  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:36.177990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:34.303597  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.803619  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:36.421535  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:38.921303  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:37.528464  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:39.528735  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:38.178316  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:40.678382  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:39.302825  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:41.803036  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:40.921448  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.921676  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:42.028624  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:44.528367  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:46.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:43.179726  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:45.678079  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:47.678864  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:44.303174  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:46.303380  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:45.421032  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:47.421476  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:49.028536  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:51.029147  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:50.178510  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:52.678038  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:48.803528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:51.303128  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:49.421550  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:51.421662  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.921436  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:53.528171  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:55.528359  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:54.678324  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:56.678950  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:53.803596  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:56.303846  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:40:55.921590  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:58.421035  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:40:57.528626  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.528836  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:01.528941  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:40:59.178418  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:01.178716  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:40:58.803255  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:01.303636  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:03.304018  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:00.421947  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:02.921538  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:04.029070  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:06.528978  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:03.178849  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.678455  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:05.803129  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:07.803516  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:05.421012  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:07.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:09.028641  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:11.528314  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:08.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.678669  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:10.303656  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:12.802863  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:09.422346  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:11.921506  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.921591  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:13.528414  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:16.028353  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:13.178173  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:15.178645  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:17.178978  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:14.803234  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:17.303832  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:16.421683  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.921735  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:18.029471  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:20.528285  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:19.678823  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:22.178464  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:19.803249  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.805282  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:21.421113  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:23.421834  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:22.528676  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:25.028614  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:24.678319  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:26.678918  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:24.303375  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:26.803671  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:25.921344  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.921528  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:27.528113  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.528360  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:31.528933  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:29.178874  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:31.678831  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:29.303894  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:31.803194  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:30.421566  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:32.921510  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:34.028783  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:36.528221  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:34.178921  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:36.679041  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:33.803493  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:36.303225  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:34.921588  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:37.422044  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:38.528309  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:40.529003  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:39.178121  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:41.178217  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:38.803230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:40.803589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:42.803627  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:39.921565  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:41.921707  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.922114  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:43.028345  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:45.028690  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:43.178994  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.678303  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:47.678398  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:45.303591  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:47.802784  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:46.421077  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:48.421358  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:47.528303  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:49.528358  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:51.528432  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:50.178878  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:52.678005  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:49.803053  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:51.803355  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:50.421484  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:52.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:53.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:56.028871  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:54.678573  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:56.678851  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:54.303589  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:56.304024  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:55.421149  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:57.422749  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:41:58.529130  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:01.029004  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:41:59.178913  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:01.678093  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:41:58.802967  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:00.803530  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:03.302974  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:41:59.921502  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:02.421235  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:03.528176  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:05.528974  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:03.678378  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.678612  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:05.303440  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:07.303517  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:04.421427  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:06.921211  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:08.028338  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:10.028605  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:08.177856  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:10.178695  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:12.677933  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:09.802768  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:12.303460  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:09.421339  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:11.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:13.921424  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:12.528546  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:15.028501  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:14.678148  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:17.177902  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:14.802922  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:17.302897  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:16.422172  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:18.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:17.528440  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:20.028178  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:19.178222  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:21.179024  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:19.803607  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:22.303402  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:20.921658  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:23.421335  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:22.028864  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:24.028909  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:26.528267  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:23.677923  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:25.678674  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:27.678990  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:24.303983  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:26.802541  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:25.421516  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:27.421596  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:28.528825  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.529079  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:30.178957  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:32.179097  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:28.802991  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:31.303608  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:29.422299  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:31.921278  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.921620  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:33.029096  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:35.528832  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:34.678305  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:37.178195  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:33.803315  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.303339  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:36.420752  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.421325  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:38.028458  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:40.028902  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:39.178476  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:41.178925  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:38.803143  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:41.303872  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:40.921457  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.921646  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:42.528579  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:44.528667  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:46.528898  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:43.678793  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:46.178954  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:43.802528  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:46.303539  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:45.421446  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:47.421741  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:48.529077  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:51.028550  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:48.678809  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:51.178540  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:48.802746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:50.803086  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:53.303060  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:49.421822  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:51.921340  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.921364  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:53.528495  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529339  347136 node_ready.go:53] node "no-preload-671514" has status "Ready":"False"
	I0401 20:42:55.529381  347136 node_ready.go:38] duration metric: took 4m0.003842971s for node "no-preload-671514" to be "Ready" ...
	I0401 20:42:55.531459  347136 out.go:201] 
	W0401 20:42:55.532809  347136 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:42:55.532827  347136 out.go:270] * 
	W0401 20:42:55.533842  347136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:42:55.535186  347136 out.go:201] 
	I0401 20:42:53.678561  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.679289  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:42:55.803263  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:57.803303  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:42:56.420956  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:42:58.421583  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:00.921332  352934 node_ready.go:53] node "default-k8s-diff-port-993330" has status "Ready":"False"
	I0401 20:43:02.418904  352934 node_ready.go:38] duration metric: took 4m0.00050867s for node "default-k8s-diff-port-993330" to be "Ready" ...
	I0401 20:43:02.420942  352934 out.go:201] 
	W0401 20:43:02.422232  352934 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:02.422249  352934 out.go:270] * 
	W0401 20:43:02.423128  352934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:02.424510  352934 out.go:201] 
	I0401 20:42:58.178720  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.679009  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:00.303699  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:02.803746  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:03.178558  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:05.678714  351594 node_ready.go:53] node "embed-certs-974821" has status "Ready":"False"
	I0401 20:43:06.678965  351594 node_ready.go:38] duration metric: took 4m0.00359519s for node "embed-certs-974821" to be "Ready" ...
	I0401 20:43:06.681158  351594 out.go:201] 
	W0401 20:43:06.682593  351594 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:06.682613  351594 out.go:270] * 
	W0401 20:43:06.683511  351594 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:06.684798  351594 out.go:201] 
	I0401 20:43:05.303230  351961 node_ready.go:53] node "old-k8s-version-964633" has status "Ready":"False"
	I0401 20:43:07.302678  351961 node_ready.go:38] duration metric: took 4m0.00268599s for node "old-k8s-version-964633" to be "Ready" ...
	I0401 20:43:07.304489  351961 out.go:201] 
	W0401 20:43:07.305731  351961 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: context deadline exceeded
	W0401 20:43:07.305770  351961 out.go:270] * 
	W0401 20:43:07.306663  351961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0401 20:43:07.308253  351961 out.go:201] 
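	All four parallel clusters (no-preload-671514, default-k8s-diff-port-993330, embed-certs-974821, old-k8s-version-964633) fail identically: node_ready.go polls the node object every couple of seconds, sees "Ready":"False" for the entire 4m node-ready window, and the start then aborts with GUEST_START once the waitNodeCondition context deadline inside the 6m node wait expires. A minimal client-go sketch of that Ready-condition poll follows; the kubeconfig path and node name are taken from the log, and minikube's own node_ready.go differs in detail:

	// node_ready_sketch.go — a minimal client-go sketch of the poll the
	// node_ready.go lines above record; minikube's real implementation differs.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// 4m budget, matching the "took 4m0.00…s" duration metrics above.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-671514", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("waitNodeCondition: context deadline exceeded")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}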
	
	
	==> CRI-O <==
	Apr 01 20:52:54 old-k8s-version-964633 crio[545]: time="2025-04-01 20:52:54.990885616Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=d797eef5-65bb-43ab-8a4f-82dee3a258ef name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:53:06 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:06.990847283Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=07f57fc3-f1d0-4a60-a0f8-9235dc4796e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:53:06 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:06.991068762Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=07f57fc3-f1d0-4a60-a0f8-9235dc4796e8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:53:20 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:20.990863485Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=08a4b59a-199f-417b-97aa-d62fc770dc36 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:53:20 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:20.991145911Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=08a4b59a-199f-417b-97aa-d62fc770dc36 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:53:20 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:20.991804351Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=384be203-e96d-411c-b2c6-ccde90a02e87 name=/runtime.v1alpha2.ImageService/PullImage
	Apr 01 20:53:20 old-k8s-version-964633 crio[545]: time="2025-04-01 20:53:20.992845304Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:54:03 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:03.990818102Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=67239974-603a-4124-9083-c19b20a31f4c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:03 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:03.991102479Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=67239974-603a-4124-9083-c19b20a31f4c name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:11.961667445Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=f5867cbc-91bc-4567-b78f-958324355b79 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:11.961906544Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:4a1c4b21597c1b4415bdbecb28a3296c6b5e23ca4f9feeb599860a1dac6a0108 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:688049,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f5867cbc-91bc-4567-b78f-958324355b79 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:18 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:18.990713114Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=57321863-b63b-4b8a-a9b7-b02c5eb7c339 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:18 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:18.990967297Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=57321863-b63b-4b8a-a9b7-b02c5eb7c339 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:30 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:30.990758133Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=a0c0e5b0-819d-466d-bb4f-018a3fd91021 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:30 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:30.991075842Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=a0c0e5b0-819d-466d-bb4f-018a3fd91021 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:44 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:44.990806879Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=ca98aae1-6bbd-4c0f-a3b4-7fd85d69331e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:44 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:44.991090435Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=ca98aae1-6bbd-4c0f-a3b4-7fd85d69331e name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:58 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:58.990762492Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3b6f1f01-5b82-467f-8768-e01c4af5f219 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:54:58 old-k8s-version-964633 crio[545]: time="2025-04-01 20:54:58.991022230Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3b6f1f01-5b82-467f-8768-e01c4af5f219 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:11.990840935Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=56173c64-c880-4b85-a598-8457bb4d0b16 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:11 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:11.991064765Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=56173c64-c880-4b85-a598-8457bb4d0b16 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:26 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:26.990927413Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=3cbafb63-325b-41ba-b133-84a3bcf39cd1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:26 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:26.991197556Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=3cbafb63-325b-41ba-b133-84a3bcf39cd1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:38 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:38.990826458Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250214-acbabc1a" id=56c7704f-d42c-44aa-8ab9-d1c470ff3e6b name=/runtime.v1alpha2.ImageService/ImageStatus
	Apr 01 20:55:38 old-k8s-version-964633 crio[545]: time="2025-04-01 20:55:38.991068497Z" level=info msg="Image docker.io/kindest/kindnetd:v20250214-acbabc1a not found" id=56c7704f-d42c-44aa-8ab9-d1c470ff3e6b name=/runtime.v1alpha2.ImageService/ImageStatus
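	The CRI-O log explains why old-k8s-version-964633 never goes Ready: the kindnet CNI image docker.io/kindest/kindnetd:v20250214-acbabc1a is never present locally, and each pull attempt ("Trying to access …") stalls without completing, so the kindnet pod that would install the CNI configuration never starts. A small Go sketch of reproducing that check-then-pull cycle on the node via crictl follows; inspecti and pull are standard crictl subcommands, and the sudo usage mirrors the environment above:

	// image_check.go — a small sketch of the image check/pull cycle the
	// CRI-O log records, driven through crictl on the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const image = "docker.io/kindest/kindnetd:v20250214-acbabc1a"

	func main() {
		// crictl inspecti typically exits non-zero when the image is absent,
		// matching the "Image ... not found" ImageStatus responses above.
		if err := exec.Command("sudo", "crictl", "inspecti", image).Run(); err != nil {
			fmt.Println("image not found, attempting pull:", err)
			out, err := exec.Command("sudo", "crictl", "pull", image).CombinedOutput()
			fmt.Printf("%s", out)
			if err != nil {
				fmt.Println("pull failed (the registry stall seen above):", err)
			}
		}
	}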
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6e2a15624e6b       10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc   16 minutes ago      Running             kube-proxy                0                   d79aac48145ed       kube-proxy-vb8ks
	476cadc498ed3       ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99   16 minutes ago      Running             kube-apiserver            0                   a0f2a56e33baf       kube-apiserver-old-k8s-version-964633
	1cf26e38ac1c6       0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934   16 minutes ago      Running             etcd                      0                   b5c714ec70c88       etcd-old-k8s-version-964633
	e1f3c07569c92       3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899   16 minutes ago      Running             kube-scheduler            0                   b0dee5245ff96       kube-scheduler-old-k8s-version-964633
	a5bc89e701040       b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080   16 minutes ago      Running             kube-controller-manager   0                   a0fa04b1b1602       kube-controller-manager-old-k8s-version-964633
	
	
	==> describe nodes <==
	Name:               old-k8s-version-964633
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-964633
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=73c6e1c927350a51068882397e0642f8dfb63f2a
	                    minikube.k8s.io/name=old-k8s-version-964633
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_01T20_26_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Apr 2025 20:26:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-964633
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Apr 2025 20:55:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Apr 2025 20:54:51 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Apr 2025 20:54:51 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Apr 2025 20:54:51 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 01 Apr 2025 20:54:51 +0000   Tue, 01 Apr 2025 20:26:17 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-964633
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 496e4a312fcb4e188c28b44d27ba4111
	  System UUID:                b6833a70-aaa0-48ad-8ca9-62cc3e7ff289
	  Boot ID:                    998ee032-5d07-42e5-839c-f756579cd457
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-old-k8s-version-964633                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-rmrss                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-old-k8s-version-964633             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-old-k8s-version-964633    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-vb8ks                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-old-k8s-version-964633             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  29m (x5 over 29m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x5 over 29m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x5 over 29m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 16m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x8 over 16m)  kubelet     Node old-k8s-version-964633 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +0.449515] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[ +12.597246] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a 7d 80 58 6c 04 08 06
	[  +0.000711] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 bf db e2 a3 5f 08 06
	[  +7.845356] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[Apr 1 20:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 3f 3e 00 a5 1c 08 06
	[ +20.323175] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +0.638468] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 5e 38 fd 47 e7 a9 08 06
	[  +7.023939] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	[ +12.985251] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 51 bc 34 44 0d 08 06
	[  +0.000445] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 cf 85 1a 8e a5 08 06
	[  +5.338672] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 d5 ae e5 6c ae 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ba 25 24 8c ac a9 08 06
	
	
	==> etcd [1cf26e38ac1c6604c953475ca04f80ac9e1430c2d45615035dcca537258ed713] <==
	2025-04-01 20:51:59.690139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:09.690175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:19.690139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:29.690067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:39.690126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:49.690143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:52:59.690046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:09.690103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:19.690058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:29.690114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:39.690095 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:49.690057 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:53:59.690131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:09.690105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:14.368584 I | mvcc: store.index: compact 1001
	2025-04-01 20:54:14.369101 I | mvcc: finished scheduled compaction at 1001 (took 278.061µs)
	2025-04-01 20:54:19.690133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:29.690064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:39.690162 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:49.690068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:54:59.690010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:55:09.690082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:55:19.690055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:55:29.690092 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2025-04-01 20:55:39.690043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:55:48 up  1:38,  0 users,  load average: 0.41, 0.34, 0.83
	Linux old-k8s-version-964633 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [476cadc498ed38467dee6e6bd14670115232b713370264319c7e5a56ecaeac7d] <==
	I0401 20:52:32.225320       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:52:32.225327       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:53:09.679348       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:53:09.679397       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:53:09.679406       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:53:40.122067       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:53:40.122119       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:53:40.122131       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0401 20:54:17.576904       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:54:17.576948       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:54:17.576955       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:54:18.250178       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:54:18.250251       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:54:18.250259       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:54:51.572901       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:54:51.572938       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:54:51.572946       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0401 20:55:18.250436       1 handler_proxy.go:102] no RequestInfo found in the context
	E0401 20:55:18.250487       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0401 20:55:18.250495       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0401 20:55:25.516452       1 client.go:360] parsed scheme: "passthrough"
	I0401 20:55:25.516509       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0401 20:55:25.516519       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [a5bc89e701040e08d72357e3dac6043fa2051845c4876d8d4c98324eb1a2f4d5] <==
	E0401 20:51:16.541728       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:51:30.582099       1 request.go:655] Throttling request took 1.048712404s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0401 20:51:31.433318       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:51:47.043450       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:52:03.083632       1 request.go:655] Throttling request took 1.048676489s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0401 20:52:03.934804       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:52:17.545122       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:52:35.585230       1 request.go:655] Throttling request took 1.048681098s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta1?timeout=32s
	W0401 20:52:36.436346       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:52:48.046618       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:53:08.086584       1 request.go:655] Throttling request took 1.048569765s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0401 20:53:08.937372       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:53:18.548370       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:53:40.587533       1 request.go:655] Throttling request took 1.048725136s, request: GET:https://192.168.85.2:8443/apis/batch/v1beta1?timeout=32s
	W0401 20:53:41.438871       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:53:49.049844       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:54:13.089194       1 request.go:655] Throttling request took 1.04849575s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0401 20:54:13.940501       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:54:19.552894       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:54:45.590892       1 request.go:655] Throttling request took 1.048701607s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0401 20:54:46.442408       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:54:50.054780       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0401 20:55:18.092564       1 request.go:655] Throttling request took 1.048464472s, request: GET:https://192.168.85.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0401 20:55:18.943615       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0401 20:55:20.556706       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [b6e2a15624e6bfb4518956b54ad139920c531d3fc7c23adccb5f26ae8087b4ae] <==
	I0401 20:26:43.259998       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:26:43.318328       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:26:43.349273       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:26:43.349451       1 server_others.go:185] Using iptables Proxier.
	I0401 20:26:43.349906       1 server.go:650] Version: v1.20.0
	I0401 20:26:43.351034       1 config.go:315] Starting service config controller
	I0401 20:26:43.351107       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:26:43.351164       1 config.go:224] Starting endpoint slice config controller
	I0401 20:26:43.356628       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:26:43.451303       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:26:43.456955       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0401 20:39:19.459621       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0401 20:39:19.459730       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0401 20:39:19.469176       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0401 20:39:19.469267       1 server_others.go:185] Using iptables Proxier.
	I0401 20:39:19.469492       1 server.go:650] Version: v1.20.0
	I0401 20:39:19.469980       1 config.go:224] Starting endpoint slice config controller
	I0401 20:39:19.469997       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0401 20:39:19.470025       1 config.go:315] Starting service config controller
	I0401 20:39:19.470030       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0401 20:39:19.570148       1 shared_informer.go:247] Caches are synced for service config 
	I0401 20:39:19.570204       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [e1f3c07569c92c3a8447517fe4a29b9a1107cefce6ec8dec3438e2043596f976] <==
	E0401 20:26:22.051414       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:22.051526       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:22.922830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0401 20:26:22.955835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0401 20:26:23.011220       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0401 20:26:23.021829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0401 20:26:23.029700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0401 20:26:23.064263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0401 20:26:23.099742       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0401 20:26:23.120264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0401 20:26:23.332498       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0401 20:26:23.438632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0401 20:26:23.512784       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0401 20:26:23.649265       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0401 20:26:26.547552       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0401 20:39:13.424195       1 serving.go:331] Generated self-signed cert in-memory
	W0401 20:39:17.235518       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0401 20:39:17.235651       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0401 20:39:17.235691       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0401 20:39:17.235733       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0401 20:39:17.536554       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0401 20:39:17.536892       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0401 20:39:17.537005       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.537056       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0401 20:39:17.642397       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 01 20:54:18 old-k8s-version-964633 kubelet[986]: E0401 20:54:18.991227     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:54:22 old-k8s-version-964633 kubelet[986]: E0401 20:54:22.112703     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:27 old-k8s-version-964633 kubelet[986]: E0401 20:54:27.113265     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:30 old-k8s-version-964633 kubelet[986]: E0401 20:54:30.991381     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:54:32 old-k8s-version-964633 kubelet[986]: E0401 20:54:32.113873     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:37 old-k8s-version-964633 kubelet[986]: E0401 20:54:37.114468     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:42 old-k8s-version-964633 kubelet[986]: E0401 20:54:42.114983     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:44 old-k8s-version-964633 kubelet[986]: E0401 20:54:44.991450     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:54:47 old-k8s-version-964633 kubelet[986]: E0401 20:54:47.115432     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:52 old-k8s-version-964633 kubelet[986]: E0401 20:54:52.116141     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:57 old-k8s-version-964633 kubelet[986]: E0401 20:54:57.116799     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:54:58 old-k8s-version-964633 kubelet[986]: E0401 20:54:58.991289     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:55:02 old-k8s-version-964633 kubelet[986]: E0401 20:55:02.117537     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:07 old-k8s-version-964633 kubelet[986]: E0401 20:55:07.118258     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:11 old-k8s-version-964633 kubelet[986]: E0401 20:55:11.991258     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:55:12 old-k8s-version-964633 kubelet[986]: E0401 20:55:12.118812     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:17 old-k8s-version-964633 kubelet[986]: E0401 20:55:17.119540     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:22 old-k8s-version-964633 kubelet[986]: E0401 20:55:22.120260     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:26 old-k8s-version-964633 kubelet[986]: E0401 20:55:26.991484     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:55:27 old-k8s-version-964633 kubelet[986]: E0401 20:55:27.120899     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:32 old-k8s-version-964633 kubelet[986]: E0401 20:55:32.121515     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:37 old-k8s-version-964633 kubelet[986]: E0401 20:55:37.122240     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:38 old-k8s-version-964633 kubelet[986]: E0401 20:55:38.991339     986 pod_workers.go:191] Error syncing pod 96d81bdc-b456-4cb9-b8fd-996bdc90c878 ("kindnet-rmrss_kube-system(96d81bdc-b456-4cb9-b8fd-996bdc90c878)"), skipping: failed to "StartContainer" for "kindnet-cni" with ImagePullBackOff: "Back-off pulling image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Apr 01 20:55:42 old-k8s-version-964633 kubelet[986]: E0401 20:55:42.122907     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Apr 01 20:55:47 old-k8s-version-964633 kubelet[986]: E0401 20:55:47.123634     986 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-964633 -n old-k8s-version-964633
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-964633 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1 (69.511821ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5nmbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-5nmbk:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-5nmbk
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  16m (x1 over 16m)   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
	  Warning  FailedScheduling  16m (x10 over 25m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-74ff55c5b-5bjk4" not found
	Error from server (NotFound): pods "kindnet-rmrss" not found
	Error from server (NotFound): pods "metrics-server-9975d5f86-vj6lt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-4cckx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-cd95d586-p4fvg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-964633 describe pod busybox coredns-74ff55c5b-5bjk4 kindnet-rmrss metrics-server-9975d5f86-vj6lt storage-provisioner dashboard-metrics-scraper-8d5bb5db8-4cckx kubernetes-dashboard-cd95d586-p4fvg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (216.91s)
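Root-cause reading of the failure above: the kindnet pod (the cluster's CNI plugin) never starts because its image docker.io/kindest/kindnetd:v20250214-acbabc1a cannot be pulled (crio logs "Image ... not found" and the kubelet loops on ImagePullBackOff). With no CNI plugin running, nothing writes a config into /etc/cni/net.d/, so the node stays NotReady (NetworkReady=false), and the node.kubernetes.io/not-ready taint leaves busybox, coredns, metrics-server, storage-provisioner and the dashboard pods Pending, which matches the non-running pod list printed by helpers_test.go:272. A possible way to confirm this by hand, assuming the old-k8s-version-964633 profile is still up (the app=kindnet selector is the label kindnet manifests usually carry, not something shown in the logs above):

	# node condition and taint
	kubectl --context old-k8s-version-964633 describe node old-k8s-version-964633 | grep -A 1 Taints
	# image-pull events on the CNI pod
	kubectl --context old-k8s-version-964633 -n kube-system describe pod -l app=kindnet
	# CNI config directory and a manual pull inside the node
	minikube -p old-k8s-version-964633 ssh -- ls -l /etc/cni/net.d/
	minikube -p old-k8s-version-964633 ssh -- sudo crictl pull docker.io/kindest/kindnetd:v20250214-acbabc1a

If the manual crictl pull also fails, the failure is image availability or registry rate limiting on docker.io rather than anything cluster-side.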

                                                
                                    

Test pass (275/323)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 34.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.34
12 TestDownloadOnly/v1.32.2/json-events 14.19
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.2
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.07
21 TestBinaryMirror 0.76
22 TestOffline 55.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 139
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 12.44
35 TestAddons/parallel/Registry 19.39
37 TestAddons/parallel/InspektorGadget 11.7
38 TestAddons/parallel/MetricsServer 6.68
40 TestAddons/parallel/CSI 56.37
41 TestAddons/parallel/Headlamp 18.52
42 TestAddons/parallel/CloudSpanner 6.48
43 TestAddons/parallel/LocalPath 57.18
44 TestAddons/parallel/NvidiaDevicePlugin 5.46
45 TestAddons/parallel/Yakd 10.86
46 TestAddons/parallel/AmdGpuDevicePlugin 5.82
47 TestAddons/StoppedEnableDisable 12.07
48 TestCertOptions 27.39
49 TestCertExpiration 220.25
51 TestForceSystemdFlag 33.14
52 TestForceSystemdEnv 35.07
54 TestKVMDriverInstallOrUpdate 5.02
58 TestErrorSpam/setup 22.62
59 TestErrorSpam/start 0.56
60 TestErrorSpam/status 0.86
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.56
63 TestErrorSpam/stop 1.34
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 42.21
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 35.46
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
75 TestFunctional/serial/CacheCmd/cache/add_local 2.03
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 39.03
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.32
86 TestFunctional/serial/LogsFileCmd 1.34
87 TestFunctional/serial/InvalidService 3.79
89 TestFunctional/parallel/ConfigCmd 0.36
90 TestFunctional/parallel/DashboardCmd 27.68
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 0.99
97 TestFunctional/parallel/ServiceCmdConnect 16.64
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 33.49
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.7
103 TestFunctional/parallel/MySQL 21.74
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.8
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
113 TestFunctional/parallel/License 0.64
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
117 TestFunctional/parallel/MountCmd/any-port 16.52
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.49
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 23.31
123 TestFunctional/parallel/MountCmd/specific-port 1.94
124 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 12.16
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.48
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
138 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
139 TestFunctional/parallel/ImageCommands/Setup 1.87
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.03
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
144 TestFunctional/parallel/ProfileCmd/profile_list 0.42
145 TestFunctional/parallel/ServiceCmd/List 1.71
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.5
154 TestFunctional/parallel/ServiceCmd/URL 0.5
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 101.29
163 TestMultiControlPlane/serial/DeployApp 5.86
164 TestMultiControlPlane/serial/PingHostFromPods 1
165 TestMultiControlPlane/serial/AddWorkerNode 33.74
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
168 TestMultiControlPlane/serial/CopyFile 15.76
169 TestMultiControlPlane/serial/StopSecondaryNode 12.46
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 44.61
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 161
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.37
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
176 TestMultiControlPlane/serial/StopCluster 35.56
177 TestMultiControlPlane/serial/RestartCluster 82.17
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
179 TestMultiControlPlane/serial/AddSecondaryNode 39.94
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
184 TestJSONOutput/start/Command 41.39
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.66
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.58
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.82
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
209 TestKicCustomNetwork/create_custom_network 35.82
210 TestKicCustomNetwork/use_default_bridge_network 22.65
211 TestKicExistingNetwork 25.94
212 TestKicCustomSubnet 23.43
213 TestKicStaticIP 26.66
214 TestMainNoArgs 0.04
215 TestMinikubeProfile 48.29
218 TestMountStart/serial/StartWithMountFirst 6.12
219 TestMountStart/serial/VerifyMountFirst 0.24
220 TestMountStart/serial/StartWithMountSecond 6.17
221 TestMountStart/serial/VerifyMountSecond 0.25
222 TestMountStart/serial/DeleteFirst 1.59
223 TestMountStart/serial/VerifyMountPostDelete 0.25
224 TestMountStart/serial/Stop 1.17
225 TestMountStart/serial/RestartStopped 7.81
226 TestMountStart/serial/VerifyMountPostStop 0.24
229 TestMultiNode/serial/FreshStart2Nodes 69.77
230 TestMultiNode/serial/DeployApp2Nodes 5.34
231 TestMultiNode/serial/PingHostFrom2Pods 0.71
232 TestMultiNode/serial/AddNode 29.76
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.63
235 TestMultiNode/serial/CopyFile 8.99
236 TestMultiNode/serial/StopNode 2.09
237 TestMultiNode/serial/StartAfterStop 8.99
238 TestMultiNode/serial/RestartKeepsNodes 89.5
239 TestMultiNode/serial/DeleteNode 4.96
240 TestMultiNode/serial/StopMultiNode 23.68
241 TestMultiNode/serial/RestartMultiNode 43.98
242 TestMultiNode/serial/ValidateNameConflict 25.04
247 TestPreload 113.15
249 TestScheduledStopUnix 96.55
252 TestInsufficientStorage 9.87
253 TestRunningBinaryUpgrade 79.78
255 TestKubernetesUpgrade 350.08
256 TestMissingContainerUpgrade 162.92
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
259 TestNoKubernetes/serial/StartWithK8s 30.03
260 TestNoKubernetes/serial/StartWithStopK8s 32.16
261 TestNoKubernetes/serial/Start 8.76
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
263 TestNoKubernetes/serial/ProfileList 5.2
264 TestNoKubernetes/serial/Stop 1.23
265 TestNoKubernetes/serial/StartNoArgs 7.44
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
267 TestStoppedBinaryUpgrade/Setup 2.46
268 TestStoppedBinaryUpgrade/Upgrade 67.05
276 TestNetworkPlugins/group/false 3.09
281 TestPause/serial/Start 42.45
282 TestPause/serial/SecondStartNoReconfiguration 24.82
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
291 TestNetworkPlugins/group/auto/Start 42.68
292 TestPause/serial/Pause 0.71
293 TestPause/serial/VerifyStatus 0.3
294 TestPause/serial/Unpause 0.66
295 TestPause/serial/PauseAgain 0.82
296 TestPause/serial/DeletePaused 2.79
297 TestPause/serial/VerifyDeletedResources 29.19
298 TestNetworkPlugins/group/kindnet/Start 42.89
299 TestNetworkPlugins/group/auto/KubeletFlags 0.28
300 TestNetworkPlugins/group/auto/NetCatPod 11.2
301 TestNetworkPlugins/group/calico/Start 56.3
302 TestNetworkPlugins/group/auto/DNS 0.18
303 TestNetworkPlugins/group/auto/Localhost 0.17
304 TestNetworkPlugins/group/auto/HairPin 0.18
305 TestNetworkPlugins/group/custom-flannel/Start 52.8
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
308 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
309 TestNetworkPlugins/group/kindnet/DNS 0.13
310 TestNetworkPlugins/group/kindnet/Localhost 0.11
311 TestNetworkPlugins/group/kindnet/HairPin 0.1
312 TestNetworkPlugins/group/calico/ControllerPod 6.01
313 TestNetworkPlugins/group/calico/KubeletFlags 0.32
314 TestNetworkPlugins/group/calico/NetCatPod 11.3
315 TestNetworkPlugins/group/enable-default-cni/Start 34.23
316 TestNetworkPlugins/group/calico/DNS 0.13
317 TestNetworkPlugins/group/calico/Localhost 0.11
318 TestNetworkPlugins/group/calico/HairPin 0.12
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
321 TestNetworkPlugins/group/custom-flannel/DNS 0.15
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
324 TestNetworkPlugins/group/flannel/Start 52.44
325 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
326 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.21
327 TestNetworkPlugins/group/bridge/Start 36.24
328 TestNetworkPlugins/group/enable-default-cni/DNS 21.11
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
332 TestNetworkPlugins/group/bridge/NetCatPod 10.22
333 TestNetworkPlugins/group/flannel/ControllerPod 6
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
335 TestNetworkPlugins/group/flannel/NetCatPod 9.23
338 TestNetworkPlugins/group/bridge/DNS 0.15
339 TestNetworkPlugins/group/bridge/Localhost 0.12
340 TestNetworkPlugins/group/bridge/HairPin 0.13
343 TestNetworkPlugins/group/flannel/DNS 0.15
344 TestNetworkPlugins/group/flannel/Localhost 0.12
345 TestNetworkPlugins/group/flannel/HairPin 0.14
354 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.81
355 TestStartStop/group/no-preload/serial/Stop 1.21
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
359 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.98
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
361 TestStartStop/group/embed-certs/serial/Stop 1.24
362 TestStartStop/group/old-k8s-version/serial/Stop 1.27
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.24
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
366 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
379 TestStartStop/group/newest-cni/serial/FirstStart 28.1
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
382 TestStartStop/group/newest-cni/serial/Stop 1.21
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
384 TestStartStop/group/newest-cni/serial/SecondStart 12.46
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
388 TestStartStop/group/newest-cni/serial/Pause 2.55
TestDownloadOnly/v1.20.0/json-events (34.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-927616 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-927616 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (34.784801412s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (34.79s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0401 19:45:49.220262   23163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0401 19:45:49.220353   23163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-927616
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-927616: exit status 85 (59.303321ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-927616 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |          |
	|         | -p download-only-927616        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:45:14
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:45:14.472614   23175 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:45:14.472858   23175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:14.472866   23175 out.go:358] Setting ErrFile to fd 2...
	I0401 19:45:14.472870   23175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:14.473025   23175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	W0401 19:45:14.473129   23175 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20506-16361/.minikube/config/config.json: open /home/jenkins/minikube-integration/20506-16361/.minikube/config/config.json: no such file or directory
	I0401 19:45:14.473661   23175 out.go:352] Setting JSON to true
	I0401 19:45:14.474542   23175 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1660,"bootTime":1743535054,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:45:14.474594   23175 start.go:139] virtualization: kvm guest
	I0401 19:45:14.477105   23175 out.go:97] [download-only-927616] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0401 19:45:14.477272   23175 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball: no such file or directory
	I0401 19:45:14.477321   23175 notify.go:220] Checking for updates...
	I0401 19:45:14.478532   23175 out.go:169] MINIKUBE_LOCATION=20506
	I0401 19:45:14.479923   23175 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:45:14.481144   23175 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:45:14.482255   23175 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 19:45:14.483246   23175 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0401 19:45:14.485298   23175 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 19:45:14.485538   23175 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:45:14.506864   23175 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 19:45:14.506924   23175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:45:14.899890   23175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-01 19:45:14.889983997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:45:14.900007   23175 docker.go:318] overlay module found
	I0401 19:45:14.901409   23175 out.go:97] Using the docker driver based on user configuration
	I0401 19:45:14.901459   23175 start.go:297] selected driver: docker
	I0401 19:45:14.901467   23175 start.go:901] validating driver "docker" against <nil>
	I0401 19:45:14.901544   23175 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:45:14.952112   23175 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-01 19:45:14.943511256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:45:14.952265   23175 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:45:14.952772   23175 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0401 19:45:14.952925   23175 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 19:45:14.954463   23175 out.go:169] Using Docker driver with root privileges
	I0401 19:45:14.955659   23175 cni.go:84] Creating CNI manager for ""
	I0401 19:45:14.955709   23175 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 19:45:14.955720   23175 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 19:45:14.955771   23175 start.go:340] cluster config:
	{Name:download-only-927616 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-927616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:45:14.957314   23175 out.go:97] Starting "download-only-927616" primary control-plane node in "download-only-927616" cluster
	I0401 19:45:14.957327   23175 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 19:45:14.958689   23175 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0401 19:45:14.958715   23175 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:45:14.958840   23175 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 19:45:14.974459   23175 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0401 19:45:14.974647   23175 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0401 19:45:14.974743   23175 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0401 19:45:15.116557   23175 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:15.116593   23175 cache.go:56] Caching tarball of preloaded images
	I0401 19:45:15.116832   23175 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:45:15.118532   23175 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0401 19:45:15.118554   23175 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:15.699467   23175 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:27.594694   23175 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:27.594785   23175 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:28.514484   23175 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0401 19:45:28.514822   23175 profile.go:143] Saving config to /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/download-only-927616/config.json ...
	I0401 19:45:28.514855   23175 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/download-only-927616/config.json: {Name:mkbbb2188df29c5a539d3bbd0f99646141f343c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0401 19:45:28.515051   23175 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0401 19:45:28.515293   23175 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-927616 host does not exist
	  To start a cluster, run: "minikube start -p download-only-927616"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.34s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-927616
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.34s)

TestDownloadOnly/v1.32.2/json-events (14.19s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-822252 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-822252 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.18657648s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (14.19s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0401 19:46:04.006898   23163 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0401 19:46:04.006952   23163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-822252
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-822252: exit status 85 (58.537268ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-927616 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | -p download-only-927616        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| delete  | -p download-only-927616        | download-only-927616 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC | 01 Apr 25 19:45 UTC |
	| start   | -o=json --download-only        | download-only-822252 | jenkins | v1.35.0 | 01 Apr 25 19:45 UTC |                     |
	|         | -p download-only-822252        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/01 19:45:49
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0401 19:45:49.860923   23616 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:45:49.861174   23616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:49.861182   23616 out.go:358] Setting ErrFile to fd 2...
	I0401 19:45:49.861186   23616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:45:49.861362   23616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 19:45:49.861929   23616 out.go:352] Setting JSON to true
	I0401 19:45:49.862740   23616 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1696,"bootTime":1743535054,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:45:49.862832   23616 start.go:139] virtualization: kvm guest
	I0401 19:45:49.943131   23616 out.go:97] [download-only-822252] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:45:49.943382   23616 notify.go:220] Checking for updates...
	I0401 19:45:49.973548   23616 out.go:169] MINIKUBE_LOCATION=20506
	I0401 19:45:50.036430   23616 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:45:50.120156   23616 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:45:50.247749   23616 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 19:45:50.378411   23616 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0401 19:45:50.525763   23616 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0401 19:45:50.526098   23616 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:45:50.547232   23616 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 19:45:50.547308   23616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:45:50.600706   23616 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-01 19:45:50.591714418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:45:50.600796   23616 docker.go:318] overlay module found
	I0401 19:45:50.642982   23616 out.go:97] Using the docker driver based on user configuration
	I0401 19:45:50.643017   23616 start.go:297] selected driver: docker
	I0401 19:45:50.643023   23616 start.go:901] validating driver "docker" against <nil>
	I0401 19:45:50.643115   23616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:45:50.690293   23616 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-04-01 19:45:50.681211938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:45:50.690441   23616 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0401 19:45:50.690901   23616 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0401 19:45:50.691043   23616 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0401 19:45:50.734968   23616 out.go:169] Using Docker driver with root privileges
	I0401 19:45:50.786473   23616 cni.go:84] Creating CNI manager for ""
	I0401 19:45:50.786572   23616 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0401 19:45:50.786582   23616 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0401 19:45:50.786655   23616 start.go:340] cluster config:
	{Name:download-only-822252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-822252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:45:50.870883   23616 out.go:97] Starting "download-only-822252" primary control-plane node in "download-only-822252" cluster
	I0401 19:45:50.870937   23616 cache.go:121] Beginning downloading kic base image for docker with crio
	I0401 19:45:50.954878   23616 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0401 19:45:50.954916   23616 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:45:50.955034   23616 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0401 19:45:50.971498   23616 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0401 19:45:50.971619   23616 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0401 19:45:50.971644   23616 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0401 19:45:50.971651   23616 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0401 19:45:50.971658   23616 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0401 19:45:51.057881   23616 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	I0401 19:45:51.057907   23616 cache.go:56] Caching tarball of preloaded images
	I0401 19:45:51.058059   23616 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
	I0401 19:45:51.121259   23616 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0401 19:45:51.121300   23616 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 ...
	I0401 19:45:51.221375   23616 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:a1ce605168a895ad5f3b3c8db1fe4d66 -> /home/jenkins/minikube-integration/20506-16361/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-822252 host does not exist
	  To start a cluster, run: "minikube start -p download-only-822252"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.20s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-822252
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.07s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-954986 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-954986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-954986
--- PASS: TestDownloadOnlyKic (1.07s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0401 19:46:05.719330   23163 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-842965 --alsologtostderr --binary-mirror http://127.0.0.1:40093 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-842965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-842965
--- PASS: TestBinaryMirror (0.76s)

TestOffline (55.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-562627 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-562627 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (52.782314242s)
helpers_test.go:175: Cleaning up "offline-crio-562627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-562627
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-562627: (2.524757714s)
--- PASS: TestOffline (55.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649141
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-649141: exit status 85 (47.610248ms)

-- stdout --
	* Profile "addons-649141" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649141"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649141
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-649141: exit status 85 (49.623837ms)

-- stdout --
	* Profile "addons-649141" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-649141"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (139s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-649141 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-649141 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m19.001633914s)
--- PASS: TestAddons/Setup (139.00s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-649141 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-649141 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (12.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-649141 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-649141 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7323dddb-3b4d-411e-aa54-5e2b18940f3e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7323dddb-3b4d-411e-aa54-5e2b18940f3e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.002933549s
addons_test.go:633: (dbg) Run:  kubectl --context addons-649141 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-649141 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-649141 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.44s)

TestAddons/parallel/Registry (19.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.079527ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-f5t9p" [4e5173fa-fe3e-4c68-80ae-b807fa653edd] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00361302s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bpvpg" [f43456fc-f979-4521-9554-0daabc37e1a9] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003566469s
addons_test.go:331: (dbg) Run:  kubectl --context addons-649141 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-649141 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-649141 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.616232243s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 ip
2025/04/01 19:49:05 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.39s)

TestAddons/parallel/InspektorGadget (11.7s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bj7hr" [023bb927-0c30-4135-ab3c-c57f8b681a11] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002844248s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable inspektor-gadget --alsologtostderr -v=1: (5.698198662s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

TestAddons/parallel/MetricsServer (6.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.101818ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-x9wfw" [95c30889-c302-41c9-b665-1e72c47e69a3] Running
I0401 19:48:46.665681   23163 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0401 19:48:46.665705   23163 kapi.go:107] duration metric: took 13.129085ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003511478s
addons_test.go:402: (dbg) Run:  kubectl --context addons-649141 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.68s)

TestAddons/parallel/CSI (56.37s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 13.139704ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-649141 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-649141 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3ba3a56b-e590-42e8-aeb0-3fdad6c79e5d] Pending
helpers_test.go:344: "task-pv-pod" [3ba3a56b-e590-42e8-aeb0-3fdad6c79e5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3ba3a56b-e590-42e8-aeb0-3fdad6c79e5d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00343235s
addons_test.go:511: (dbg) Run:  kubectl --context addons-649141 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649141 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-649141 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-649141 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-649141 delete pod task-pv-pod: (1.24101898s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-649141 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-649141 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-649141 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b008e31a-28fb-4441-8009-d74ff8742b26] Pending
helpers_test.go:344: "task-pv-pod-restore" [b008e31a-28fb-4441-8009-d74ff8742b26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b008e31a-28fb-4441-8009-d74ff8742b26] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005308839s
addons_test.go:553: (dbg) Run:  kubectl --context addons-649141 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-649141 delete pod task-pv-pod-restore: (1.643401583s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-649141 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-649141 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.531400056s)
--- PASS: TestAddons/parallel/CSI (56.37s)

TestAddons/parallel/Headlamp (18.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-649141 --alsologtostderr -v=1
I0401 19:48:46.652587   23163 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-k9lg8" [73940a71-bcae-4ab9-909d-16a83bcf6e74] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-k9lg8" [73940a71-bcae-4ab9-909d-16a83bcf6e74] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.031228973s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable headlamp --alsologtostderr -v=1: (5.755282239s)
--- PASS: TestAddons/parallel/Headlamp (18.52s)

TestAddons/parallel/CloudSpanner (6.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-4tzjq" [3e2ea287-cc70-43a3-8493-541f6d6277ac] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003219966s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.48s)

                                                
                                    
TestAddons/parallel/LocalPath (57.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-649141 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-649141 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4eabf4db-75eb-4efe-bb27-fb02e530dd7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4eabf4db-75eb-4efe-bb27-fb02e530dd7d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4eabf4db-75eb-4efe-bb27-fb02e530dd7d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.002729879s
addons_test.go:906: (dbg) Run:  kubectl --context addons-649141 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 ssh "cat /opt/local-path-provisioner/pvc-dcafb04a-54c9-48ba-b8f1-ef3390737a6d_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-649141 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-649141 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.347608804s)
--- PASS: TestAddons/parallel/LocalPath (57.18s)
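Note: the repeated helpers_test.go:394 lines above are a poll loop on the PVC phase. An equivalent shell sketch of that wait, reusing the same jsonpath query (the 2s interval is illustrative):

	until [ "$(kubectl --context addons-649141 get pvc test-pvc -n default \
	  -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 2; done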

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cfwld" [4e2876a7-2b87-487b-8684-2742384fe6c7] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003201148s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

                                                
                                    
TestAddons/parallel/Yakd (10.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-tb4cc" [7a7e70ef-fa6e-434f-9f3d-797a64dde353] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003595018s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-649141 addons disable yakd --alsologtostderr -v=1: (5.858052894s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.82s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-t788h" [9045ad72-ef2d-4089-86df-edad2333b849] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.002871321s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.82s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.07s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-649141
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-649141: (11.830554743s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-649141
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-649141
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-649141
--- PASS: TestAddons/StoppedEnableDisable (12.07s)
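Note: this test asserts that addons can still be toggled while the cluster is stopped. A minimal sketch of the same sequence (`minikube` standing in for `out/minikube-linux-amd64`):

	minikube stop -p addons-649141
	minikube addons enable dashboard -p addons-649141
	minikube addons disable dashboard -p addons-649141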

                                                
                                    
TestCertOptions (27.39s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-433236 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-433236 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.662836414s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-433236 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-433236 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-433236 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-433236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-433236
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-433236: (4.072513158s)
--- PASS: TestCertOptions (27.39s)
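Note: the openssl invocation above is how the configured SANs and apiserver port can be inspected by hand; the grep filter below is an illustrative addition, not part of the test:

	minikube -p cert-options-433236 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'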

                                                
                                    
TestCertExpiration (220.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884182 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884182 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.98658133s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-884182 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-884182 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.916356059s)
helpers_test.go:175: Cleaning up "cert-expiration-884182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-884182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-884182: (2.345746414s)
--- PASS: TestCertExpiration (220.25s)
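Note: the two starts account for only ~38s of the 220s total; the remainder is the test waiting out the 3m certificate lifetime so that the second start has to rotate expired certificates. A sketch of the same flow (the explicit sleep is an assumption standing in for the test's internal wait):

	minikube start -p cert-expiration-884182 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # assumed wait: let the 3m certificates expire
	minikube start -p cert-expiration-884182 --cert-expiration=8760h --driver=docker --container-runtime=crio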

                                                
                                    
TestForceSystemdFlag (33.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-842032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-842032 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.346119253s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-842032 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-842032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-842032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-842032: (4.386836654s)
--- PASS: TestForceSystemdFlag (33.14s)
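Note: the final ssh step reads CRI-O's drop-in config to confirm the cgroup manager. A sketch of the same check; the expected `cgroup_manager = "systemd"` value is an assumption about what --force-systemd writes, not quoted from this log:

	minikube -p force-systemd-flag-842032 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" \
	  | grep cgroup_manager   # expect: cgroup_manager = "systemd" (assumed)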

                                                
                                    
TestForceSystemdEnv (35.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-609053 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-609053 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.20753599s)
helpers_test.go:175: Cleaning up "force-systemd-env-609053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-609053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-609053: (2.862406659s)
--- PASS: TestForceSystemdEnv (35.07s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.02s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0401 20:21:14.116758   23163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:21:14.116940   23163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0401 20:21:14.145935   23163 install.go:62] docker-machine-driver-kvm2: exit status 1
W0401 20:21:14.146051   23163 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0401 20:21:14.146099   23163 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2311865202/001/docker-machine-driver-kvm2
I0401 20:21:14.437035   23163 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2311865202/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000591838 gz:0xc0005918e0 tar:0xc000591870 tar.bz2:0xc000591880 tar.gz:0xc0005918a0 tar.xz:0xc0005918c0 tar.zst:0xc0005918d0 tbz2:0xc000591880 tgz:0xc0005918a0 txz:0xc0005918c0 tzst:0xc0005918d0 xz:0xc0005918e8 zip:0xc0005918f0 zst:0xc000591900] Getters:map[file:0xc001b8acb0 http:0xc00069f1d0 https:0xc00069f220] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0401 20:21:14.437077   23163 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2311865202/001/docker-machine-driver-kvm2
I0401 20:21:17.082546   23163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0401 20:21:17.082627   23163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0401 20:21:17.109573   23163 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0401 20:21:17.109611   23163 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0401 20:21:17.109685   23163 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0401 20:21:17.109723   23163 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2311865202/002/docker-machine-driver-kvm2
I0401 20:21:17.168477   23163 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2311865202/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc000591838 gz:0xc0005918e0 tar:0xc000591870 tar.bz2:0xc000591880 tar.gz:0xc0005918a0 tar.xz:0xc0005918c0 tar.zst:0xc0005918d0 tbz2:0xc000591880 tgz:0xc0005918a0 txz:0xc0005918c0 tzst:0xc0005918d0 xz:0xc0005918e8 zip:0xc0005918f0 zst:0xc000591900] Getters:map[file:0xc0006913f0 http:0xc000896fa0 https:0xc000896ff0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0401 20:21:17.168529   23163 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2311865202/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.02s)
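Note: both 404s above come from the arch-suffixed release URL; the download then falls back to the unsuffixed "common" URL. A shell sketch of that fallback (checksum verification omitted for brevity):

	base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
	curl -fsSLO "$base/docker-machine-driver-kvm2-amd64" \
	  || curl -fsSLO "$base/docker-machine-driver-kvm2"   # fall back to the common version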

                                                
                                    
TestErrorSpam/setup (22.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-514384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514384 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-514384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-514384 --driver=docker  --container-runtime=crio: (22.622071368s)
--- PASS: TestErrorSpam/setup (22.62s)

                                                
                                    
TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

                                                
                                    
TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 status
--- PASS: TestErrorSpam/status (0.86s)

                                                
                                    
TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.56s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 unpause
--- PASS: TestErrorSpam/unpause (1.56s)

                                                
                                    
TestErrorSpam/stop (1.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 stop: (1.165607192s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-514384 --log_dir /tmp/nospam-514384 stop
--- PASS: TestErrorSpam/stop (1.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20506-16361/.minikube/files/etc/test/nested/copy/23163/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.21s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-432066 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (42.208723424s)
--- PASS: TestFunctional/serial/StartWithProxy (42.21s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.46s)

=== RUN   TestFunctional/serial/SoftStart
I0401 19:53:24.390899   23163 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --alsologtostderr -v=8
E0401 19:53:26.124114   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.130442   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.141798   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.163162   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.204560   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.286014   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.447548   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:26.769266   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:27.411333   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:28.692950   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:31.254741   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:36.376088   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:53:46.618100   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-432066 --alsologtostderr -v=8: (35.457962793s)
functional_test.go:680: soft start took 35.458617247s for "functional-432066" cluster.
I0401 19:53:59.850220   23163 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (35.46s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-432066 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:3.1: (1.030694345s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:3.3: (1.063208065s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 cache add registry.k8s.io/pause:latest: (1.048084221s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-432066 /tmp/TestFunctionalserialCacheCmdcacheadd_local998748251/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache add minikube-local-cache-test:functional-432066
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 cache add minikube-local-cache-test:functional-432066: (1.720606072s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache delete minikube-local-cache-test:functional-432066
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-432066
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (278.380977ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0401 19:54:07.099473   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
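Note: the sequence above is remove → verify-missing → reload → verify-present. The same round trip by hand (`minikube` standing in for `out/minikube-linux-amd64`):

	minikube -p functional-432066 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-432066 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
	minikube -p functional-432066 cache reload
	minikube -p functional-432066 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again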

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 kubectl -- --context functional-432066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-432066 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-432066 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.0313412s)
functional_test.go:778: restart took 39.031465261s for "functional-432066" cluster.
I0401 19:54:46.506604   23163 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (39.03s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-432066 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.32s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 logs: (1.322371195s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 logs --file /tmp/TestFunctionalserialLogsFileCmd2943120858/001/logs.txt
E0401 19:54:48.060956   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 logs --file /tmp/TestFunctionalserialLogsFileCmd2943120858/001/logs.txt: (1.337083737s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

                                                
                                    
TestFunctional/serial/InvalidService (3.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-432066 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-432066
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-432066: exit status 115 (314.319469ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30450 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-432066 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.79s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 config get cpus: exit status 14 (65.262304ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 config get cpus: exit status 14 (53.94752ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
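Note: exit status 14 on `config get` of an unset key is the behavior being asserted, not a failure. The same round trip by hand:

	minikube -p functional-432066 config get cpus; echo $?   # key unset -> exit 14
	minikube -p functional-432066 config set cpus 2
	minikube -p functional-432066 config get cpus            # prints 2
	minikube -p functional-432066 config unset cpus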

                                                
                                    
TestFunctional/parallel/DashboardCmd (27.68s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-432066 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-432066 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 60793: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.68s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-432066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (149.159509ms)

                                                
                                                
-- stdout --
	* [functional-432066] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 19:54:55.279459   59859 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:54:55.279546   59859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:54:55.279550   59859 out.go:358] Setting ErrFile to fd 2...
	I0401 19:54:55.279554   59859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:54:55.279771   59859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 19:54:55.280285   59859 out.go:352] Setting JSON to false
	I0401 19:54:55.281130   59859 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2241,"bootTime":1743535054,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:54:55.281187   59859 start.go:139] virtualization: kvm guest
	I0401 19:54:55.282813   59859 out.go:177] * [functional-432066] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 19:54:55.283939   59859 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:54:55.283944   59859 notify.go:220] Checking for updates...
	I0401 19:54:55.286275   59859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:54:55.287459   59859 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:54:55.288624   59859 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 19:54:55.289695   59859 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:54:55.290875   59859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:54:55.292563   59859 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:54:55.293182   59859 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:54:55.316439   59859 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 19:54:55.316529   59859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:54:55.362814   59859 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-01 19:54:55.354597735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:54:55.362921   59859 docker.go:318] overlay module found
	I0401 19:54:55.364524   59859 out.go:177] * Using the docker driver based on existing profile
	I0401 19:54:55.365565   59859 start.go:297] selected driver: docker
	I0401 19:54:55.365578   59859 start.go:901] validating driver "docker" against &{Name:functional-432066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-432066 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:54:55.365678   59859 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:54:55.367985   59859 out.go:201] 
	W0401 19:54:55.369190   59859 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0401 19:54:55.370347   59859 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
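Note: exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is the expected outcome here; --dry-run validates the request against the 1800MB minimum without touching the existing cluster. Reproduced by hand:

	minikube start -p functional-432066 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	echo $?   # 23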

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-432066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-432066 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.636033ms)

                                                
                                                
-- stdout --
	* [functional-432066] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0401 19:54:55.089626   59653 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:54:55.089816   59653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:54:55.089843   59653 out.go:358] Setting ErrFile to fd 2...
	I0401 19:54:55.089870   59653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:54:55.090197   59653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 19:54:55.090769   59653 out.go:352] Setting JSON to false
	I0401 19:54:55.091792   59653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2241,"bootTime":1743535054,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 19:54:55.091880   59653 start.go:139] virtualization: kvm guest
	I0401 19:54:55.093737   59653 out.go:177] * [functional-432066] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0401 19:54:55.095452   59653 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 19:54:55.095463   59653 notify.go:220] Checking for updates...
	I0401 19:54:55.100888   59653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 19:54:55.103509   59653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 19:54:55.104767   59653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 19:54:55.105940   59653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 19:54:55.107156   59653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 19:54:55.109032   59653 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:54:55.109739   59653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 19:54:55.138648   59653 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 19:54:55.138788   59653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:54:55.214218   59653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-01 19:54:55.204473907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:54:55.214324   59653 docker.go:318] overlay module found
	I0401 19:54:55.216100   59653 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0401 19:54:55.217249   59653 start.go:297] selected driver: docker
	I0401 19:54:55.217263   59653 start.go:901] validating driver "docker" against &{Name:functional-432066 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-432066 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0401 19:54:55.217374   59653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 19:54:55.219587   59653 out.go:201] 
	W0401 19:54:55.220643   59653 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0401 19:54:55.221836   59653 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
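Note: the French output above is the point of this test; minikube picks its display language from the standard locale environment variables, and the localized message is the same RSRC_INSUFFICIENT_REQ_MEMORY error seen in DryRun (requested 250MiB is below the 1800MB usable minimum). A minimal repro sketch, assuming LC_ALL is the variable the harness sets:

    # Hypothetical repro: force a French locale, then request too little memory.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-432066 \
      --dry-run --memory 250MB --driver=docker --container-runtime=crio
    # Expected: exit status 23 with the localized error shown above.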

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)
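Note: status -f takes a Go template over the status struct (fields .Host, .Kubelet, .APIServer, .Kubeconfig); the "kublet" key in the logged command is a label the test chose, copied here verbatim. The three forms exercised above:

    out/minikube-linux-amd64 -p functional-432066 status
    out/minikube-linux-amd64 -p functional-432066 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-linux-amd64 -p functional-432066 status -o json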

TestFunctional/parallel/ServiceCmdConnect (16.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-432066 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-432066 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-rgjs2" [c73de422-66f4-491d-8554-b22bcadbe746] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-rgjs2" [c73de422-66f4-491d-8554-b22bcadbe746] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 16.003697889s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:30917
functional_test.go:1692: http://192.168.49.2:30917: success! body:

Hostname: hello-node-connect-58f9cf68d8-rgjs2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30917
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (16.64s)
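Note: the flow above condenses to three commands; a sketch using the same names as the test:

    kubectl --context functional-432066 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-432066 expose deployment hello-node-connect --type=NodePort --port=8080
    # Prints the node URL (http://192.168.49.2:30917 in this run) once the pod is Running.
    out/minikube-linux-amd64 -p functional-432066 service hello-node-connect --url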

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (33.49s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6e7b6579-a36b-4676-a669-b65185c792fb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003012792s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-432066 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-432066 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-432066 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-432066 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6680b8d0-b1ed-4b74-82ce-1a54fe4c4b8d] Pending
helpers_test.go:344: "sp-pod" [6680b8d0-b1ed-4b74-82ce-1a54fe4c4b8d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/04/01 19:55:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [6680b8d0-b1ed-4b74-82ce-1a54fe4c4b8d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003440727s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-432066 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-432066 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-432066 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [88d64108-ba4b-4322-b817-b53806e6104c] Pending
helpers_test.go:344: "sp-pod" [88d64108-ba4b-4322-b817-b53806e6104c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [88d64108-ba4b-4322-b817-b53806e6104c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003301275s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-432066 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.49s)
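Note: the testdata manifests are not reproduced in the log. A minimal claim of the kind the test applies might look like the following (the claim name myclaim is from the log; the storage size is an assumption, not the contents of testdata/storage-provisioner/pvc.yaml):

    kubectl --context functional-432066 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi    # size assumed for illustration
    EOF

The exec/delete/apply sequence above then checks durability: a file written to the mounted volume (/tmp/mount/foo) must still be there after the pod is deleted and recreated.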

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh -n functional-432066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cp functional-432066:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1313977715/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh -n functional-432066 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh -n functional-432066 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)
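Note: minikube cp copies in both directions and creates missing target directories, which is what the third transfer checks:

    # Host file into the node, node file back to the host, and a copy into a
    # directory that does not yet exist inside the node.
    out/minikube-linux-amd64 -p functional-432066 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-432066 cp functional-432066:/home/docker/cp-test.txt ./cp-test.txt
    out/minikube-linux-amd64 -p functional-432066 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt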

TestFunctional/parallel/MySQL (21.74s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-432066 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-dvnsf" [995c2469-7c60-4f0f-8b0a-06939d19e1c0] Pending
helpers_test.go:344: "mysql-58ccfd96bb-dvnsf" [995c2469-7c60-4f0f-8b0a-06939d19e1c0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-dvnsf" [995c2469-7c60-4f0f-8b0a-06939d19e1c0] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003550594s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;": exit status 1 (114.942982ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0401 19:55:10.370281   23163 retry.go:31] will retry after 827.201855ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;": exit status 1 (234.874332ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0401 19:55:11.432641   23163 retry.go:31] will retry after 1.644478221s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;": exit status 1 (101.218337ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0401 19:55:13.179489   23163 retry.go:31] will retry after 1.483670091s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-432066 exec mysql-58ccfd96bb-dvnsf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.74s)
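Note: the ERROR 1045/2002 responses above are expected while mysqld is still initializing inside the container; the harness simply retries with backoff until the query succeeds. A sketch of the same probe as a shell loop (deploy/mysql resolves to one of the deployment's pods):

    until kubectl --context functional-432066 exec deploy/mysql -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2   # retry until the server accepts the root password
    done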

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/23163/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/test/nested/copy/23163/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)
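Note: FileSync checks minikube's file sync mechanism: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the same absolute path at start. A sketch assuming the documented layout (23163 here is just the test runner's PID):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/23163
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/23163/hosts
    # After the next start, the file exists inside the node:
    out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/test/nested/copy/23163/hosts"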

TestFunctional/parallel/CertSync (1.8s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/23163.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/ssl/certs/23163.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/23163.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /usr/share/ca-certificates/23163.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/231632.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/ssl/certs/231632.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/231632.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /usr/share/ca-certificates/231632.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
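Note: CertSync is the companion check for certificates: PEM files under $MINIKUBE_HOME/certs are expected to be installed into the node's trust store, both under /etc/ssl/certs and /usr/share/ca-certificates, plus the OpenSSL hash name (51391683.0 above). A sketch under that assumption, with a hypothetical cert file:

    cp my-ca.pem ~/.minikube/certs/    # my-ca.pem is a placeholder name
    out/minikube-linux-amd64 -p functional-432066 ssh "sudo cat /etc/ssl/certs/my-ca.pem"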

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-432066 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active docker": exit status 1 (260.535394ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active containerd": exit status 1 (258.226038ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
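Note: the exit status 3 in both stderr blocks comes from systemd, not minikube: systemctl is-active exits 0 only for "active" and non-zero (typically 3) for "inactive", and ssh propagates that code. The non-zero exits are therefore the proof that docker and containerd are disabled while crio is the active runtime:

    out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active docker"   # inactive, exit 3
    out/minikube-linux-amd64 -p functional-432066 ssh "sudo systemctl is-active crio"     # active runtime, exit 0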

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/MountCmd/any-port (16.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdany-port1137926016/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1743537293947418746" to /tmp/TestFunctionalparallelMountCmdany-port1137926016/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1743537293947418746" to /tmp/TestFunctionalparallelMountCmdany-port1137926016/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1743537293947418746" to /tmp/TestFunctionalparallelMountCmdany-port1137926016/001/test-1743537293947418746
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (292.035108ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0401 19:54:54.239774   23163 retry.go:31] will retry after 264.400984ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  1 19:54 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  1 19:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  1 19:54 test-1743537293947418746
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh cat /mount-9p/test-1743537293947418746
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-432066 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c29a8ab2-2079-41e1-b159-6cc2e3e5a700] Pending
helpers_test.go:344: "busybox-mount" [c29a8ab2-2079-41e1-b159-6cc2e3e5a700] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c29a8ab2-2079-41e1-b159-6cc2e3e5a700] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c29a8ab2-2079-41e1-b159-6cc2e3e5a700] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 14.002904668s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-432066 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdany-port1137926016/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (16.52s)
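Note: minikube mount runs a 9p server on the host and mounts it inside the node; this test writes files on the host side and reads them from a pod. Condensed, with a hypothetical host directory:

    # Background the mount (port auto-selected), then verify it from inside the node.
    out/minikube-linux-amd64 mount -p functional-432066 /tmp/src:/mount-9p &
    out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p"
    # The specific-port variant below pins the server port instead:
    out/minikube-linux-amd64 mount -p functional-432066 /tmp/src:/mount-9p --port 46464 &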

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 60630: os: process already finished
helpers_test.go:508: unable to kill pid 60423: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-432066 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f41d2889-85dd-4295-ab28-7dc81839608a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f41d2889-85dd-4295-ab28-7dc81839608a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 23.003191943s
I0401 19:55:19.754136   23163 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (23.31s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdspecific-port405973931/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (357.897032ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0401 19:55:10.828620   23163 retry.go:31] will retry after 456.704301ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdspecific-port405973931/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "sudo umount -f /mount-9p": exit status 1 (260.997888ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-432066 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdspecific-port405973931/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T" /mount1: exit status 1 (302.825095ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0401 19:55:12.710768   23163 retry.go:31] will retry after 481.334071ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-432066 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-432066 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3160611126/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-432066 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.208.93 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
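Note: the tunnel subtests cover the LoadBalancer path end to end: minikube tunnel runs as a daemon, the pending nginx-svc LoadBalancer receives an ingress IP (10.103.208.93 in this run), and that IP answers directly from the host. Condensed:

    out/minikube-linux-amd64 -p functional-432066 tunnel &
    kubectl --context functional-432066 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.103.208.93/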

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-432066 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-432066 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-432066 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-dj7hw" [a19b0b72-185b-4cd8-8895-05511ad9432e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-dj7hw" [a19b0b72-185b-4cd8-8895-05511ad9432e] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003610904s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.16s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432066 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-432066
localhost/kicbase/echo-server:functional-432066
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432066 image ls --format short --alsologtostderr:
I0401 19:55:35.062837   66066 out.go:345] Setting OutFile to fd 1 ...
I0401 19:55:35.062936   66066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.062947   66066 out.go:358] Setting ErrFile to fd 2...
I0401 19:55:35.062953   66066 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.063133   66066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
I0401 19:55:35.063643   66066 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.063729   66066 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.064055   66066 cli_runner.go:164] Run: docker container inspect functional-432066 --format={{.State.Status}}
I0401 19:55:35.083288   66066 ssh_runner.go:195] Run: systemctl --version
I0401 19:55:35.083353   66066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432066
I0401 19:55:35.101702   66066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/functional-432066/id_rsa Username:docker}
I0401 19:55:35.194319   66066 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
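Note: this and the next two subtests run the same listing through each supported encoding; only the --format value changes:

    out/minikube-linux-amd64 -p functional-432066 image ls --format short
    out/minikube-linux-amd64 -p functional-432066 image ls --format table
    out/minikube-linux-amd64 -p functional-432066 image ls --format json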

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432066 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20250214-acbabc1a | df3849d954c98 | 95.7MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy              | v1.32.2            | f1332858868e1 | 95.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-432066  | dfbbf8388135d | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.32.2            | b6a454c5a800d | 90.8MB |
| registry.k8s.io/kube-scheduler          | v1.32.2            | d8e673e7c9983 | 70.7MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| docker.io/kindest/kindnetd              | v20241212-9f82dd49 | d300845f67aeb | 95.7MB |
| docker.io/library/nginx                 | alpine             | 1ff4bb4faebcf | 49.3MB |
| localhost/kicbase/echo-server           | functional-432066  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.32.2            | 85b7a174738ba | 98.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | latest             | 53a18edff8091 | 196MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432066 image ls --format table --alsologtostderr:
I0401 19:55:35.280037   66166 out.go:345] Setting OutFile to fd 1 ...
I0401 19:55:35.280526   66166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.280551   66166 out.go:358] Setting ErrFile to fd 2...
I0401 19:55:35.280560   66166 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.281005   66166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
I0401 19:55:35.281889   66166 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.281981   66166 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.282355   66166 cli_runner.go:164] Run: docker container inspect functional-432066 --format={{.State.Status}}
I0401 19:55:35.299594   66166 ssh_runner.go:195] Run: systemctl --version
I0401 19:55:35.299637   66166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432066
I0401 19:55:35.319331   66166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/functional-432066/id_rsa Username:docker}
I0401 19:55:35.413947   66166 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
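
Note: the stderr trace above shows how the table is produced: minikube sshes into the node and runs `sudo crictl images --output json`. A minimal Go sketch of reproducing that listing by hand, assuming `minikube` is on PATH (the profile name is copied from this run):

// crictl_list.go — reproduce the underlying listing via minikube ssh.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-432066", "ssh", "--",
		"sudo", "crictl", "images", "--output", "json").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Println(string(out))
}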

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432066 image ls --format json --alsologtostderr:
[{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"90793286"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76","registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"70653254"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{
"id":"d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26","docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"95714353"},{"id":"df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495","docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"95703604"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c7
1b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":["docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591","docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc"],"repoTags":["docker.io/library/nginx:alpine"],"size":"49323988"},{"id":"53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":["docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19","docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed05
0f01506bb4"],"repoTags":["docker.io/library/nginx:latest"],"size":"196159380"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"
id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"98055648"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d","registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d"],"repoTags":["regist
ry.k8s.io/kube-proxy:v1.32.2"],"size":"95271321"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"dfbbf8388135d1ad70829922093c03a931a8da01c3142833242ca611570b82f7","repoDigests":["localhost/minikube-local-cache-test@sha256:fbb025233a9e7a6a982c3cda66b202942e6f1d6ebcfeed551025a7a5150c9768"],"repoTags":["localhost/minikube-local-cache-test:functional-432066"],"size":"3330"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
"gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-432066"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6
e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432066 image ls --format json --alsologtostderr:
I0401 19:55:35.279870   66165 out.go:345] Setting OutFile to fd 1 ...
I0401 19:55:35.280093   66165 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.280108   66165 out.go:358] Setting ErrFile to fd 2...
I0401 19:55:35.280117   66165 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.280373   66165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
I0401 19:55:35.281084   66165 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.281205   66165 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.281632   66165 cli_runner.go:164] Run: docker container inspect functional-432066 --format={{.State.Status}}
I0401 19:55:35.300061   66165 ssh_runner.go:195] Run: systemctl --version
I0401 19:55:35.300108   66165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432066
I0401 19:55:35.318824   66165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/functional-432066/id_rsa Username:docker}
I0401 19:55:35.410115   66165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
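
Note: of the three list formats exercised here, the JSON one is the easiest to consume programmatically. A minimal Go sketch, assuming `minikube` is on PATH; the struct fields mirror the keys visible in the output above:

// list_images.go — parse `minikube image ls --format json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-432066",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // short ID, as shown in the table view
		}
		fmt.Printf("%s  %v  %s bytes\n", id, img.RepoTags, img.Size)
	}
}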

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432066 image ls --format yaml --alsologtostderr:
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:01669d976f198e210414e4864454330f6cbd4e5fedf1570b0340d206442f2ae5
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "90793286"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
- registry.k8s.io/kube-scheduler@sha256:c98f93221ffa10bfb46b85966915759dbcaf957098364763242e814fee84363b
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "70653254"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests:
- docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591
- docker.io/library/nginx@sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc
repoTags:
- docker.io/library/nginx:alpine
size: "49323988"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests:
- docker.io/library/nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
- docker.io/library/nginx@sha256:54809b2f36d0ff38e8e5362b0239779e4b75c2f19ad70ef047ed050f01506bb4
repoTags:
- docker.io/library/nginx:latest
size: "196159380"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
- registry.k8s.io/kube-proxy@sha256:ab90de2ec2cbade95df799a63d85e438f51817055ecee067b694fdd0f776e15d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "95271321"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-432066
size: "4943877"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
- docker.io/kindest/kindnetd@sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "95703604"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
- docker.io/kindest/kindnetd@sha256:a3c74735c5fc7cab683a2f94dddec913052aacaa8d8b773c88d428e8dee3dd40
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "95714353"
- id: dfbbf8388135d1ad70829922093c03a931a8da01c3142833242ca611570b82f7
repoDigests:
- localhost/minikube-local-cache-test@sha256:fbb025233a9e7a6a982c3cda66b202942e6f1d6ebcfeed551025a7a5150c9768
repoTags:
- localhost/minikube-local-cache-test:functional-432066
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:48e677803a23233a10a796f3d7edc73223e3fbaceb6113665c1015464a743e9d
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "98055648"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432066 image ls --format yaml --alsologtostderr:
I0401 19:55:35.062401   66067 out.go:345] Setting OutFile to fd 1 ...
I0401 19:55:35.062681   66067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.062692   66067 out.go:358] Setting ErrFile to fd 2...
I0401 19:55:35.062699   66067 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.062979   66067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
I0401 19:55:35.063563   66067 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.063700   66067 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.064081   66067 cli_runner.go:164] Run: docker container inspect functional-432066 --format={{.State.Status}}
I0401 19:55:35.083774   66067 ssh_runner.go:195] Run: systemctl --version
I0401 19:55:35.083816   66067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432066
I0401 19:55:35.101663   66067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/functional-432066/id_rsa Username:docker}
I0401 19:55:35.194369   66067 ssh_runner.go:195] Run: sudo crictl images --output json
W0401 19:55:35.230232   66067 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 56c647ac-c53d-427d-9a58-4573493f8c23
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-432066 ssh pgrep buildkitd: exit status 1 (249.975449ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image build -t localhost/my-image:functional-432066 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 image build -t localhost/my-image:functional-432066 testdata/build --alsologtostderr: (3.496104398s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-432066 image build -t localhost/my-image:functional-432066 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f8c5ba2456a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-432066
--> 53cad5d5ed7
Successfully tagged localhost/my-image:functional-432066
53cad5d5ed7b613fcfcb13c13be020e3e0a7701d04f3786245951283af3b09ab
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-432066 image build -t localhost/my-image:functional-432066 testdata/build --alsologtostderr:
I0401 19:55:35.747178   66439 out.go:345] Setting OutFile to fd 1 ...
I0401 19:55:35.747461   66439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.747471   66439 out.go:358] Setting ErrFile to fd 2...
I0401 19:55:35.747475   66439 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0401 19:55:35.747652   66439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
I0401 19:55:35.748278   66439 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.748931   66439 config.go:182] Loaded profile config "functional-432066": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0401 19:55:35.749500   66439 cli_runner.go:164] Run: docker container inspect functional-432066 --format={{.State.Status}}
I0401 19:55:35.769078   66439 ssh_runner.go:195] Run: systemctl --version
I0401 19:55:35.769141   66439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-432066
I0401 19:55:35.786856   66439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/functional-432066/id_rsa Username:docker}
I0401 19:55:35.878026   66439 build_images.go:161] Building image from path: /tmp/build.2998145361.tar
I0401 19:55:35.878119   66439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0401 19:55:35.886235   66439 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2998145361.tar
I0401 19:55:35.889084   66439 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2998145361.tar: stat -c "%s %y" /var/lib/minikube/build/build.2998145361.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2998145361.tar': No such file or directory
I0401 19:55:35.889119   66439 ssh_runner.go:362] scp /tmp/build.2998145361.tar --> /var/lib/minikube/build/build.2998145361.tar (3072 bytes)
I0401 19:55:35.910373   66439 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2998145361
I0401 19:55:35.918219   66439 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2998145361 -xf /var/lib/minikube/build/build.2998145361.tar
I0401 19:55:35.925963   66439 crio.go:315] Building image: /var/lib/minikube/build/build.2998145361
I0401 19:55:35.926006   66439 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-432066 /var/lib/minikube/build/build.2998145361 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0401 19:55:39.175387   66439 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-432066 /var/lib/minikube/build/build.2998145361 --cgroup-manager=cgroupfs: (3.249361865s)
I0401 19:55:39.175437   66439 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2998145361
I0401 19:55:39.183621   66439 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2998145361.tar
I0401 19:55:39.191353   66439 build_images.go:217] Built localhost/my-image:functional-432066 from /tmp/build.2998145361.tar
I0401 19:55:39.191385   66439 build_images.go:133] succeeded building to: functional-432066
I0401 19:55:39.191392   66439 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
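
Note: per the stderr trace, `image build` tars the local build context, copies it to /var/lib/minikube/build on the node, and runs `podman build` there. A minimal Go sketch driving the same build, assuming `minikube` is on PATH and ./testdata/build contains a Dockerfile:

// image_build.go — build an image inside the minikube node.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-432066", "image", "build",
		"-t", "localhost/my-image:functional-432066", "testdata/build")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}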

TestFunctional/parallel/ImageCommands/Setup (1.87s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.847994577s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-432066
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image load --daemon kicbase/echo-server:functional-432066 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 image load --daemon kicbase/echo-server:functional-432066 --alsologtostderr: (1.15130978s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image load --daemon kicbase/echo-server:functional-432066 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 image load --daemon kicbase/echo-server:functional-432066 --alsologtostderr: (2.818846677s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-432066
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image load --daemon kicbase/echo-server:functional-432066 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)
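
Note: the three load tests above share one pull → tag → load-into-cluster flow. A minimal Go sketch of that flow, assuming `docker` and `minikube` are on PATH:

// load_daemon.go — tag a local image and load it into the cluster.
package main

import "os/exec"

func main() {
	steps := [][]string{
		{"docker", "pull", "kicbase/echo-server:latest"},
		{"docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-432066"},
		{"minikube", "-p", "functional-432066", "image", "load",
			"--daemon", "kicbase/echo-server:functional-432066"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}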

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "350.606025ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "66.545852ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service list
functional_test.go:1476: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 service list: (1.707294462s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "362.502779ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "127.609696ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
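
Note: the "Took …" figures in the profile tests are plain wall-clock measurements around a command. A minimal Go sketch of the same measurement, assuming `minikube` is on PATH:

// profile_timing.go — time `minikube profile list` variants.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func timed(args ...string) {
	start := time.Now()
	if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("Took %q to run %v\n", time.Since(start).String(), args)
}

func main() {
	timed("minikube", "profile", "list", "-o", "json")
	timed("minikube", "profile", "list", "-o", "json", "--light")
}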

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image save kicbase/echo-server:functional-432066 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image rm kicbase/echo-server:functional-432066 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)
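
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together amount to a save → rm → load roundtrip through a tarball. A minimal Go sketch of that roundtrip, assuming `minikube` is on PATH; the tar path is illustrative:

// image_roundtrip.go — save an image to a tar, remove it, load it back.
package main

import "os/exec"

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		panic(string(out))
	}
}

func main() {
	const img = "kicbase/echo-server:functional-432066"
	run("minikube", "-p", "functional-432066", "image", "save", img, "/tmp/echo-server-save.tar")
	run("minikube", "-p", "functional-432066", "image", "rm", img)
	run("minikube", "-p", "functional-432066", "image", "load", "/tmp/echo-server-save.tar")
}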

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-linux-amd64 -p functional-432066 service list -o json: (1.682894842s)
functional_test.go:1511: Took "1.68298699s" to run "out/minikube-linux-amd64 -p functional-432066 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-432066
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 image save --daemon kicbase/echo-server:functional-432066 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-432066
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31238
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-432066 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31238
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
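
Note: the HTTPS/Format/URL tests all resolve a NodePort endpoint for the service and probe it. A minimal Go sketch that resolves the URL and issues a plain GET, assuming `minikube` is on PATH and a hello-node service exists as above:

// service_url.go — resolve a service URL and probe it.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-432066",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url) // e.g. http://192.168.49.2:31238
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}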

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-432066
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-432066
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-432066
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.29s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-793101 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0401 19:56:09.985337   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-793101 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.609820329s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (101.29s)
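
Note: a minimal Go sketch of the HA bring-up exercised above, assuming `minikube` is on PATH; the flags mirror the logged command:

// ha_start.go — start an HA (multi-control-plane) cluster and check status.
package main

import (
	"os"
	"os/exec"
)

func main() {
	start := exec.Command("minikube", "start", "-p", "ha-793101", "--wait=true",
		"--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		panic(err)
	}
	status := exec.Command("minikube", "-p", "ha-793101", "status", "-v=7")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	_ = status.Run() // non-zero exit means some node is not Running
}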

TestMultiControlPlane/serial/DeployApp (5.86s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-793101 -- rollout status deployment/busybox: (4.043387944s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-6sr58 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-fjrn8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-wlvkt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-6sr58 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-fjrn8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-wlvkt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-6sr58 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-fjrn8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-wlvkt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.86s)

TestMultiControlPlane/serial/PingHostFromPods (1s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-6sr58 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-6sr58 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-fjrn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-fjrn8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-wlvkt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-793101 -- exec busybox-58667487b6-wlvkt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)
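
Note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pulls the resolved address off the fifth line of busybox's nslookup output; the follow-up ping confirms the host gateway is reachable from inside the pod. A minimal Go sketch of the same check, assuming `kubectl` points at the cluster; the pod name is copied from this run:

// ping_host.go — resolve the host IP from inside a pod, then ping it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-58667487b6-6sr58"
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", script).Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out)) // e.g. 192.168.49.1
	fmt.Println("host IP:", ip)
	if err := exec.Command("kubectl", "exec", pod, "--",
		"sh", "-c", "ping -c 1 "+ip).Run(); err != nil {
		panic(err)
	}
}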

TestMultiControlPlane/serial/AddWorkerNode (33.74s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-793101 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-793101 -v=7 --alsologtostderr: (32.905946624s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.74s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-793101 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

TestMultiControlPlane/serial/CopyFile (15.76s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp testdata/cp-test.txt ha-793101:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1904203813/001/cp-test_ha-793101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101:/home/docker/cp-test.txt ha-793101-m02:/home/docker/cp-test_ha-793101_ha-793101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test_ha-793101_ha-793101-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101:/home/docker/cp-test.txt ha-793101-m03:/home/docker/cp-test_ha-793101_ha-793101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test_ha-793101_ha-793101-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101:/home/docker/cp-test.txt ha-793101-m04:/home/docker/cp-test_ha-793101_ha-793101-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test_ha-793101_ha-793101-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp testdata/cp-test.txt ha-793101-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1904203813/001/cp-test_ha-793101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m02:/home/docker/cp-test.txt ha-793101:/home/docker/cp-test_ha-793101-m02_ha-793101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test_ha-793101-m02_ha-793101.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m02:/home/docker/cp-test.txt ha-793101-m03:/home/docker/cp-test_ha-793101-m02_ha-793101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test_ha-793101-m02_ha-793101-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m02:/home/docker/cp-test.txt ha-793101-m04:/home/docker/cp-test_ha-793101-m02_ha-793101-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test_ha-793101-m02_ha-793101-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp testdata/cp-test.txt ha-793101-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1904203813/001/cp-test_ha-793101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m03:/home/docker/cp-test.txt ha-793101:/home/docker/cp-test_ha-793101-m03_ha-793101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test_ha-793101-m03_ha-793101.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m03:/home/docker/cp-test.txt ha-793101-m02:/home/docker/cp-test_ha-793101-m03_ha-793101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test_ha-793101-m03_ha-793101-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m03:/home/docker/cp-test.txt ha-793101-m04:/home/docker/cp-test_ha-793101-m03_ha-793101-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test_ha-793101-m03_ha-793101-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp testdata/cp-test.txt ha-793101-m04:/home/docker/cp-test.txt
E0401 19:58:26.124925   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1904203813/001/cp-test_ha-793101-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m04:/home/docker/cp-test.txt ha-793101:/home/docker/cp-test_ha-793101-m04_ha-793101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101 "sudo cat /home/docker/cp-test_ha-793101-m04_ha-793101.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m04:/home/docker/cp-test.txt ha-793101-m02:/home/docker/cp-test_ha-793101-m04_ha-793101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m02 "sudo cat /home/docker/cp-test_ha-793101-m04_ha-793101-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 cp ha-793101-m04:/home/docker/cp-test.txt ha-793101-m03:/home/docker/cp-test_ha-793101-m04_ha-793101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 ssh -n ha-793101-m03 "sudo cat /home/docker/cp-test_ha-793101-m04_ha-793101-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.76s)
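
Note: each step of the copy matrix above is a `minikube cp` into a node followed by a `minikube ssh` read-back. A minimal Go sketch of one such cell, assuming `minikube` is on PATH; the paths are illustrative:

// copy_file.go — push a file to a node and read it back.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		panic(string(out))
	}
	return string(out)
}

func main() {
	run("minikube", "-p", "ha-793101", "cp", "testdata/cp-test.txt",
		"ha-793101-m02:/home/docker/cp-test.txt")
	fmt.Print(run("minikube", "-p", "ha-793101", "ssh", "-n", "ha-793101-m02",
		"sudo cat /home/docker/cp-test.txt"))
}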

TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-793101 node stop m02 -v=7 --alsologtostderr: (11.807256554s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr: exit status 7 (651.638171ms)

-- stdout --
	ha-793101
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-793101-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-793101-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-793101-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0401 19:58:41.479891   87671 out.go:345] Setting OutFile to fd 1 ...
	I0401 19:58:41.480182   87671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:58:41.480193   87671 out.go:358] Setting ErrFile to fd 2...
	I0401 19:58:41.480197   87671 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 19:58:41.480398   87671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 19:58:41.480564   87671 out.go:352] Setting JSON to false
	I0401 19:58:41.480593   87671 mustload.go:65] Loading cluster: ha-793101
	I0401 19:58:41.480708   87671 notify.go:220] Checking for updates...
	I0401 19:58:41.480981   87671 config.go:182] Loaded profile config "ha-793101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 19:58:41.481001   87671 status.go:174] checking status of ha-793101 ...
	I0401 19:58:41.481399   87671 cli_runner.go:164] Run: docker container inspect ha-793101 --format={{.State.Status}}
	I0401 19:58:41.499036   87671 status.go:371] ha-793101 host status = "Running" (err=<nil>)
	I0401 19:58:41.499062   87671 host.go:66] Checking if "ha-793101" exists ...
	I0401 19:58:41.499338   87671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-793101
	I0401 19:58:41.519431   87671 host.go:66] Checking if "ha-793101" exists ...
	I0401 19:58:41.519723   87671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 19:58:41.519773   87671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-793101
	I0401 19:58:41.537283   87671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/ha-793101/id_rsa Username:docker}
	I0401 19:58:41.630711   87671 ssh_runner.go:195] Run: systemctl --version
	I0401 19:58:41.634543   87671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:58:41.645407   87671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 19:58:41.693446   87671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-01 19:58:41.684550171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 19:58:41.694006   87671 kubeconfig.go:125] found "ha-793101" server: "https://192.168.49.254:8443"
	I0401 19:58:41.694036   87671 api_server.go:166] Checking apiserver status ...
	I0401 19:58:41.694067   87671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:58:41.704495   87671 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	I0401 19:58:41.713292   87671 api_server.go:182] apiserver freezer: "5:freezer:/docker/b84684a7e0d869fcff42bbc73400c757b13ec0e55f5f6133a62928426d820d08/crio/crio-aee684b2e9623c6f804672f6cbe7c0581af437872ea6a7574b6fee6de4792351"
	I0401 19:58:41.713351   87671 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b84684a7e0d869fcff42bbc73400c757b13ec0e55f5f6133a62928426d820d08/crio/crio-aee684b2e9623c6f804672f6cbe7c0581af437872ea6a7574b6fee6de4792351/freezer.state
	I0401 19:58:41.720996   87671 api_server.go:204] freezer state: "THAWED"
	I0401 19:58:41.721026   87671 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0401 19:58:41.725395   87671 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0401 19:58:41.725420   87671 status.go:463] ha-793101 apiserver status = Running (err=<nil>)
	I0401 19:58:41.725433   87671 status.go:176] ha-793101 status: &{Name:ha-793101 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 19:58:41.725453   87671 status.go:174] checking status of ha-793101-m02 ...
	I0401 19:58:41.725788   87671 cli_runner.go:164] Run: docker container inspect ha-793101-m02 --format={{.State.Status}}
	I0401 19:58:41.743145   87671 status.go:371] ha-793101-m02 host status = "Stopped" (err=<nil>)
	I0401 19:58:41.743171   87671 status.go:384] host is not running, skipping remaining checks
	I0401 19:58:41.743178   87671 status.go:176] ha-793101-m02 status: &{Name:ha-793101-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 19:58:41.743203   87671 status.go:174] checking status of ha-793101-m03 ...
	I0401 19:58:41.743450   87671 cli_runner.go:164] Run: docker container inspect ha-793101-m03 --format={{.State.Status}}
	I0401 19:58:41.760523   87671 status.go:371] ha-793101-m03 host status = "Running" (err=<nil>)
	I0401 19:58:41.760546   87671 host.go:66] Checking if "ha-793101-m03" exists ...
	I0401 19:58:41.760862   87671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-793101-m03
	I0401 19:58:41.777392   87671 host.go:66] Checking if "ha-793101-m03" exists ...
	I0401 19:58:41.777698   87671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 19:58:41.777743   87671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-793101-m03
	I0401 19:58:41.794578   87671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/ha-793101-m03/id_rsa Username:docker}
	I0401 19:58:41.886593   87671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:58:41.896988   87671 kubeconfig.go:125] found "ha-793101" server: "https://192.168.49.254:8443"
	I0401 19:58:41.897012   87671 api_server.go:166] Checking apiserver status ...
	I0401 19:58:41.897047   87671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 19:58:41.906735   87671 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1412/cgroup
	I0401 19:58:41.915512   87671 api_server.go:182] apiserver freezer: "5:freezer:/docker/8f73ffab19ffe4f0110733f8b1ac9030e798a55bf66cc3d3b3594184a9d37f7a/crio/crio-d287c4209dbd4088f17eeb5f508906a818d145aa6170c2cc83ca64c9f6e9dadd"
	I0401 19:58:41.915576   87671 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8f73ffab19ffe4f0110733f8b1ac9030e798a55bf66cc3d3b3594184a9d37f7a/crio/crio-d287c4209dbd4088f17eeb5f508906a818d145aa6170c2cc83ca64c9f6e9dadd/freezer.state
	I0401 19:58:41.923193   87671 api_server.go:204] freezer state: "THAWED"
	I0401 19:58:41.923230   87671 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0401 19:58:41.926930   87671 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0401 19:58:41.926951   87671 status.go:463] ha-793101-m03 apiserver status = Running (err=<nil>)
	I0401 19:58:41.926958   87671 status.go:176] ha-793101-m03 status: &{Name:ha-793101-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 19:58:41.926986   87671 status.go:174] checking status of ha-793101-m04 ...
	I0401 19:58:41.927204   87671 cli_runner.go:164] Run: docker container inspect ha-793101-m04 --format={{.State.Status}}
	I0401 19:58:41.945583   87671 status.go:371] ha-793101-m04 host status = "Running" (err=<nil>)
	I0401 19:58:41.945610   87671 host.go:66] Checking if "ha-793101-m04" exists ...
	I0401 19:58:41.945896   87671 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-793101-m04
	I0401 19:58:41.963833   87671 host.go:66] Checking if "ha-793101-m04" exists ...
	I0401 19:58:41.964061   87671 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 19:58:41.964091   87671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-793101-m04
	I0401 19:58:41.982263   87671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/ha-793101-m04/id_rsa Username:docker}
	I0401 19:58:42.074519   87671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 19:58:42.084910   87671 status.go:176] ha-793101-m04 status: &{Name:ha-793101-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.46s)
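
The stderr trace above is the whole per-node status pipeline: docker container inspect for host state, an SSH round-trip running systemctl is-active kubelet, a pgrep plus freezer-cgroup lookup to find the apiserver container, and finally an HTTPS probe of /healthz. A minimal Go sketch of that last step, assuming the endpoint shown in the log and substituting InsecureSkipVerify for the client certificates a real check would load from the kubeconfig:

// Probe the apiserver health endpoint the way the log above records:
// GET /healthz and treat HTTP 200 as "Running". The endpoint is taken
// from the log; TLS verification is skipped purely for the sketch.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz returned", resp.StatusCode) // expect 200: ok
}

Note that every node is probed through 192.168.49.254:8443, the shared load-balancer address, which is why stopping the m02 control plane leaves the apiserver verdict for the surviving nodes unchanged.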

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 node start m02 -v=7 --alsologtostderr
E0401 19:58:53.827622   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-793101 node start m02 -v=7 --alsologtostderr: (43.723150747s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (161s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-793101 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-793101 -v=7 --alsologtostderr
E0401 19:59:53.252394   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.258770   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.270162   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.291589   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.332983   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.414411   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.575921   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:53.897680   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:54.539751   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:55.821417   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 19:59:58.384404   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:00:03.505905   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-793101 -v=7 --alsologtostderr: (36.66723276s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-793101 --wait=true -v=7 --alsologtostderr
E0401 20:00:13.748225   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:00:34.230309   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:01:15.191866   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-793101 --wait=true -v=7 --alsologtostderr: (2m4.236462554s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-793101
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (161.00s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-793101 node delete m03 -v=7 --alsologtostderr: (10.594164946s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)
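
The go-template in the final check above deserves a gloss: it walks every node, then every condition, and prints only the status of the Ready condition, so the assertion reduces to counting "True" lines. A standalone sketch with text/template; the sample JSON is a trimmed, hypothetical node list standing in for what kubectl actually feeds it:

// Evaluate the same template the test passes to kubectl -o go-template.
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

const nodes = `{"items":[{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False"},
  {"type":"Ready","status":"True"}]}}]}`

func main() {
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	var v map[string]interface{}
	if err := json.Unmarshal([]byte(nodes), &v); err != nil {
		panic(err)
	}
	if err := tmpl.Execute(os.Stdout, v); err != nil { // prints " True"
		panic(err)
	}
}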

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (35.56s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 stop -v=7 --alsologtostderr
E0401 20:02:37.113409   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-793101 stop -v=7 --alsologtostderr: (35.455050588s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr: exit status 7 (105.417168ms)

-- stdout --
	ha-793101
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-793101-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-793101-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0401 20:02:56.750201  105141 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:02:56.750322  105141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:02:56.750331  105141 out.go:358] Setting ErrFile to fd 2...
	I0401 20:02:56.750335  105141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:02:56.750570  105141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:02:56.750769  105141 out.go:352] Setting JSON to false
	I0401 20:02:56.750796  105141 mustload.go:65] Loading cluster: ha-793101
	I0401 20:02:56.750914  105141 notify.go:220] Checking for updates...
	I0401 20:02:56.751181  105141 config.go:182] Loaded profile config "ha-793101": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:02:56.751206  105141 status.go:174] checking status of ha-793101 ...
	I0401 20:02:56.751617  105141 cli_runner.go:164] Run: docker container inspect ha-793101 --format={{.State.Status}}
	I0401 20:02:56.770118  105141 status.go:371] ha-793101 host status = "Stopped" (err=<nil>)
	I0401 20:02:56.770139  105141 status.go:384] host is not running, skipping remaining checks
	I0401 20:02:56.770146  105141 status.go:176] ha-793101 status: &{Name:ha-793101 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:02:56.770188  105141 status.go:174] checking status of ha-793101-m02 ...
	I0401 20:02:56.770433  105141 cli_runner.go:164] Run: docker container inspect ha-793101-m02 --format={{.State.Status}}
	I0401 20:02:56.787392  105141 status.go:371] ha-793101-m02 host status = "Stopped" (err=<nil>)
	I0401 20:02:56.787423  105141 status.go:384] host is not running, skipping remaining checks
	I0401 20:02:56.787429  105141 status.go:176] ha-793101-m02 status: &{Name:ha-793101-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:02:56.787445  105141 status.go:174] checking status of ha-793101-m04 ...
	I0401 20:02:56.787690  105141 cli_runner.go:164] Run: docker container inspect ha-793101-m04 --format={{.State.Status}}
	I0401 20:02:56.804220  105141 status.go:371] ha-793101-m04 host status = "Stopped" (err=<nil>)
	I0401 20:02:56.804256  105141 status.go:384] host is not running, skipping remaining checks
	I0401 20:02:56.804262  105141 status.go:176] ha-793101-m04 status: &{Name:ha-793101-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.56s)
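
The exit status 7 from the status command above is expected after a full stop. As an assumption about minikube internals rather than anything this report states, the status exit code reads as a bitmask of "not running" flags, so a cluster whose host, control plane, and Kubernetes are all stopped exits with 1|2|4 = 7:

// Sketch of the assumed exit-code bitmask; the flag names are illustrative.
package main

import "fmt"

const (
	minikubeNotRunning = 1 << iota // 1: host stopped
	clusterNotRunning              // 2: control plane stopped
	k8sNotRunning                  // 4: kubernetes not running
)

func main() {
	code := minikubeNotRunning | clusterNotRunning | k8sNotRunning
	fmt.Println("exit status", code) // exit status 7
}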

TestMultiControlPlane/serial/RestartCluster (82.17s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-793101 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0401 20:03:26.124829   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-793101 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.412265368s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.17s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (39.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-793101 --control-plane -v=7 --alsologtostderr
E0401 20:04:53.252315   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-793101 --control-plane -v=7 --alsologtostderr: (39.111055467s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-793101 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (41.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-823037 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0401 20:05:20.959255   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-823037 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (41.386942429s)
--- PASS: TestJSONOutput/start/Command (41.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-823037 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-823037 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-823037 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-823037 --output=json --user=testUser: (5.824415689s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-546851 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-546851 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.404597ms)

-- stdout --
	{"specversion":"1.0","id":"fb11c193-6b06-4c3d-8c54-741b3329d6d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-546851] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9148d13-24f5-43f0-b430-49eb0ce5a295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20506"}}
	{"specversion":"1.0","id":"7d0c8a2c-b637-4360-9592-14d97cc74c59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c2237efe-6f09-4bc2-9538-10aee1f9f181","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig"}}
	{"specversion":"1.0","id":"3cdddb16-0251-4bcb-8e19-2cf81c475475","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube"}}
	{"specversion":"1.0","id":"053145c1-0426-49fc-b65c-db16a93d715c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6224d2cc-d88a-4540-82a1-ae7d70443fe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"512d69d0-72e6-441e-b29b-b3acec1464fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-546851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-546851
--- PASS: TestErrorJSONOutput (0.20s)
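
Each stdout line above is a CloudEvents-style JSON object, which is the contract --output=json makes: machine-readable steps, infos, and errors. A small sketch of consuming that stream; the field names are copied from the log, and the filter type io.k8s.sigs.minikube.error is taken from the last event shown:

// Read minikube's JSON event stream from stdin and surface error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip any non-JSON noise
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s (exit %s)\n",
				ev.Data["name"], ev.Data["message"], ev.Data["exitcode"])
		}
	}
}

A consumer like this, pointed at minikube start --output=json, is presumably how the DistinctCurrentSteps and IncreasingCurrentSteps checks assert on the currentstep counter.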

TestKicCustomNetwork/create_custom_network (35.82s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-144858 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-144858 --network=: (33.781422486s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-144858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-144858
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-144858: (2.017205568s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.82s)

TestKicCustomNetwork/use_default_bridge_network (22.65s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-918813 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-918813 --network=bridge: (20.736078358s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-918813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-918813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-918813: (1.895628143s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.65s)

TestKicExistingNetwork (25.94s)

=== RUN   TestKicExistingNetwork
I0401 20:07:00.032065   23163 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0401 20:07:00.048600   23163 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0401 20:07:00.048667   23163 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0401 20:07:00.048684   23163 cli_runner.go:164] Run: docker network inspect existing-network
W0401 20:07:00.065016   23163 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0401 20:07:00.065044   23163 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0401 20:07:00.065058   23163 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0401 20:07:00.065189   23163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0401 20:07:00.083228   23163 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-64a5a6ce16e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:10:1d:21:82:a2} reservation:<nil>}
I0401 20:07:00.083646   23163 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001613120}
I0401 20:07:00.083693   23163 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0401 20:07:00.083778   23163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0401 20:07:00.129127   23163 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-147874 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-147874 --network=existing-network: (23.894042778s)
helpers_test.go:175: Cleaning up "existing-network-147874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-147874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-147874: (1.915422713s)
I0401 20:07:25.955374   23163 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.94s)
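
The interesting part of this test sits in the I0401 20:07:00.083 lines: minikube scans private /24 subnets, skips 192.168.49.0/24 because the existing bridge br-64a5a6ce16e8 already owns it, and settles on 192.168.58.0/24 before running docker network create. A sketch of that scan follows; the step of 9 between candidates is inferred from the 49 -> 58 jump in the log, so treat it as an assumption:

// Find the first private 192.168.x.0/24 that no local interface occupies.
package main

import (
	"fmt"
	"net"
)

func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // fail closed: assume taken
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && subnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, ... (assumed step)
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, _ := net.ParseCIDR(cidr)
		if !taken(subnet) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}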

TestKicCustomSubnet (23.43s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-378573 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-378573 --subnet=192.168.60.0/24: (21.33943454s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-378573 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-378573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-378573
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-378573: (2.069162779s)
--- PASS: TestKicCustomSubnet (23.43s)

TestKicStaticIP (26.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-590806 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-590806 --static-ip=192.168.200.200: (24.487776219s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-590806 ip
helpers_test.go:175: Cleaning up "static-ip-590806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-590806
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-590806: (2.053226753s)
--- PASS: TestKicStaticIP (26.66s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (48.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-639755 --driver=docker  --container-runtime=crio
E0401 20:08:26.124917   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-639755 --driver=docker  --container-runtime=crio: (20.379304355s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-650266 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-650266 --driver=docker  --container-runtime=crio: (22.719593382s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-639755
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-650266
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-650266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-650266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-650266: (1.821540835s)
helpers_test.go:175: Cleaning up "first-639755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-639755
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-639755: (2.240003073s)
--- PASS: TestMinikubeProfile (48.29s)

TestMountStart/serial/StartWithMountFirst (6.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-085122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-085122 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.123594056s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-085122 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (6.17s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097272 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097272 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.173714835s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.17s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-085122 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-085122 --alsologtostderr -v=5: (1.591201363s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-097272
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-097272: (1.170523802s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.81s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097272
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097272: (6.813756288s)
--- PASS: TestMountStart/serial/RestartStopped (7.81s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097272 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (69.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910416 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0401 20:09:49.189082   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
E0401 20:09:53.251929   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910416 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.315435102s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.77s)

TestMultiNode/serial/DeployApp2Nodes (5.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-910416 -- rollout status deployment/busybox: (3.96705434s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-7dfcj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-pvxlw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-7dfcj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-pvxlw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-7dfcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-pvxlw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-7dfcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-7dfcj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-pvxlw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-910416 -- exec busybox-58667487b6-pvxlw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
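
The sh -c pipeline above, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, leans on busybox nslookup's fixed layout: the resolved address sits on line 5, third space-separated field. A sketch of the same positional extraction, with an assumed sample of busybox output:

// Replicate awk 'NR==5' | cut -d' ' -f3 over a sample nslookup result.
package main

import (
	"fmt"
	"strings"
)

const nslookup = `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal
`

func main() {
	lines := strings.Split(nslookup, "\n")
	fields := strings.Split(lines[4], " ") // awk counts from 1, Go from 0
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // -f3: 192.168.67.1, the IP the test then pings
	}
}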

TestMultiNode/serial/AddNode (29.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-910416 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-910416 -v 3 --alsologtostderr: (29.165304393s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (29.76s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-910416 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (8.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp testdata/cp-test.txt multinode-910416:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2628790160/001/cp-test_multinode-910416.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416:/home/docker/cp-test.txt multinode-910416-m02:/home/docker/cp-test_multinode-910416_multinode-910416-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test_multinode-910416_multinode-910416-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416:/home/docker/cp-test.txt multinode-910416-m03:/home/docker/cp-test_multinode-910416_multinode-910416-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test_multinode-910416_multinode-910416-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp testdata/cp-test.txt multinode-910416-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2628790160/001/cp-test_multinode-910416-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m02:/home/docker/cp-test.txt multinode-910416:/home/docker/cp-test_multinode-910416-m02_multinode-910416.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test_multinode-910416-m02_multinode-910416.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m02:/home/docker/cp-test.txt multinode-910416-m03:/home/docker/cp-test_multinode-910416-m02_multinode-910416-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test_multinode-910416-m02_multinode-910416-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp testdata/cp-test.txt multinode-910416-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2628790160/001/cp-test_multinode-910416-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m03:/home/docker/cp-test.txt multinode-910416:/home/docker/cp-test_multinode-910416-m03_multinode-910416.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416 "sudo cat /home/docker/cp-test_multinode-910416-m03_multinode-910416.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 cp multinode-910416-m03:/home/docker/cp-test.txt multinode-910416-m02:/home/docker/cp-test_multinode-910416-m03_multinode-910416-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 ssh -n multinode-910416-m02 "sudo cat /home/docker/cp-test_multinode-910416-m03_multinode-910416-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.99s)
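
Every CopyFile step above pairs a "minikube cp" with an ssh'd "sudo cat" so the bytes are verified, not just transferred. A minimal sketch of one such round-trip, assuming the binary path, profile, node name, and remote path from this log; the local file argument is a stand-in for testdata/cp-test.txt.

// cpcheck.go: push a local file to a node, read it back over ssh, and
// compare contents to confirm the copy arrived intact.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile, node, remote = "multinode-910416", "multinode-910416-m02", "/home/docker/cp-test.txt"
	if len(os.Args) != 2 {
		log.Fatal("usage: cpcheck <local-file>")
	}
	local := os.Args[1]

	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("content mismatch after round-trip")
	}
	log.Println("round-trip OK")
}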

TestMultiNode/serial/StopNode (2.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-910416 node stop m03: (1.172974963s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910416 status: exit status 7 (463.837628ms)
-- stdout --
	multinode-910416
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910416-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910416-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr: exit status 7 (456.739391ms)
-- stdout --
	multinode-910416
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-910416-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-910416-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0401 20:11:26.941410  170439 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:11:26.941678  170439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:11:26.941687  170439 out.go:358] Setting ErrFile to fd 2...
	I0401 20:11:26.941691  170439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:11:26.941906  170439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:11:26.942059  170439 out.go:352] Setting JSON to false
	I0401 20:11:26.942085  170439 mustload.go:65] Loading cluster: multinode-910416
	I0401 20:11:26.942134  170439 notify.go:220] Checking for updates...
	I0401 20:11:26.942455  170439 config.go:182] Loaded profile config "multinode-910416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:11:26.942476  170439 status.go:174] checking status of multinode-910416 ...
	I0401 20:11:26.942863  170439 cli_runner.go:164] Run: docker container inspect multinode-910416 --format={{.State.Status}}
	I0401 20:11:26.960936  170439 status.go:371] multinode-910416 host status = "Running" (err=<nil>)
	I0401 20:11:26.960983  170439 host.go:66] Checking if "multinode-910416" exists ...
	I0401 20:11:26.961347  170439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910416
	I0401 20:11:26.979071  170439 host.go:66] Checking if "multinode-910416" exists ...
	I0401 20:11:26.979319  170439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:11:26.979369  170439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910416
	I0401 20:11:26.996227  170439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/multinode-910416/id_rsa Username:docker}
	I0401 20:11:27.086807  170439 ssh_runner.go:195] Run: systemctl --version
	I0401 20:11:27.090518  170439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:11:27.100749  170439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:11:27.149108  170439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-04-01 20:11:27.140188827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:11:27.149604  170439 kubeconfig.go:125] found "multinode-910416" server: "https://192.168.67.2:8443"
	I0401 20:11:27.149632  170439 api_server.go:166] Checking apiserver status ...
	I0401 20:11:27.149661  170439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0401 20:11:27.160044  170439 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	I0401 20:11:27.169051  170439 api_server.go:182] apiserver freezer: "5:freezer:/docker/150218feb28870f5196ce9f7f85a5553a0e1d5cc104cfb3c2540454a56a730b8/crio/crio-678b93658eb99a131638fae16d0a317b25c5897ff73d94794cf0d543be72329d"
	I0401 20:11:27.169103  170439 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/150218feb28870f5196ce9f7f85a5553a0e1d5cc104cfb3c2540454a56a730b8/crio/crio-678b93658eb99a131638fae16d0a317b25c5897ff73d94794cf0d543be72329d/freezer.state
	I0401 20:11:27.176546  170439 api_server.go:204] freezer state: "THAWED"
	I0401 20:11:27.176575  170439 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0401 20:11:27.180199  170439 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0401 20:11:27.180218  170439 status.go:463] multinode-910416 apiserver status = Running (err=<nil>)
	I0401 20:11:27.180226  170439 status.go:176] multinode-910416 status: &{Name:multinode-910416 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:11:27.180239  170439 status.go:174] checking status of multinode-910416-m02 ...
	I0401 20:11:27.180576  170439 cli_runner.go:164] Run: docker container inspect multinode-910416-m02 --format={{.State.Status}}
	I0401 20:11:27.198363  170439 status.go:371] multinode-910416-m02 host status = "Running" (err=<nil>)
	I0401 20:11:27.198385  170439 host.go:66] Checking if "multinode-910416-m02" exists ...
	I0401 20:11:27.198608  170439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-910416-m02
	I0401 20:11:27.215504  170439 host.go:66] Checking if "multinode-910416-m02" exists ...
	I0401 20:11:27.215758  170439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0401 20:11:27.215791  170439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-910416-m02
	I0401 20:11:27.232769  170439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20506-16361/.minikube/machines/multinode-910416-m02/id_rsa Username:docker}
	I0401 20:11:27.323047  170439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0401 20:11:27.333628  170439 status.go:176] multinode-910416-m02 status: &{Name:multinode-910416-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:11:27.333663  170439 status.go:174] checking status of multinode-910416-m03 ...
	I0401 20:11:27.334018  170439 cli_runner.go:164] Run: docker container inspect multinode-910416-m03 --format={{.State.Status}}
	I0401 20:11:27.352130  170439 status.go:371] multinode-910416-m03 host status = "Stopped" (err=<nil>)
	I0401 20:11:27.352153  170439 status.go:384] host is not running, skipping remaining checks
	I0401 20:11:27.352159  170439 status.go:176] multinode-910416-m03 status: &{Name:multinode-910416-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
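
Note how StopNode uses the status command: after stopping m03, "minikube status" is expected to fail with exit status 7, so the non-zero exit is itself the assertion. A sketch of reading that exit code from Go, assuming the binary path and profile from this log; the interpretation of code 7 is inferred from this run, not asserted from documentation.

// statuscode.go: run "minikube status" and surface its exit code instead of
// treating any non-zero exit as a hard failure.
package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-910416", "status").CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode() // non-zero: some component is not running
	} else if err != nil {
		log.Fatal(err) // the binary could not be executed at all
	}
	log.Printf("exit status %d\n%s", code, out)
	if code == 7 {
		log.Println("a host reported as stopped, which is what the test expects after 'node stop m03'")
	}
}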

TestMultiNode/serial/StartAfterStop (8.99s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-910416 node start m03 -v=7 --alsologtostderr: (8.332813548s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.99s)

TestMultiNode/serial/RestartKeepsNodes (89.5s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910416
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-910416
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-910416: (24.66895255s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910416 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910416 --wait=true -v=8 --alsologtostderr: (1m4.736978385s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910416
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.50s)
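
RestartKeepsNodes boils down to one invariant: "minikube node list" reports the same node set before the stop/start cycle as after it. A sketch of that check, assuming the binary path and profile from this log and assuming each output line begins with the node name (the column layout is a guess, not something this log confirms):

// nodelist.go: capture the node set, stop and restart the cluster, then
// verify the set is unchanged.
package main

import (
	"log"
	"os/exec"
	"reflect"
	"sort"
	"strings"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %s: %v\n%s", strings.Join(args, " "), err, out)
	}
	return out
}

func nodeNames() []string {
	out := run("node", "list", "-p", "multinode-910416")
	var names []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		names = append(names, strings.Fields(line)[0]) // assumed: first column is the name
	}
	sort.Strings(names)
	return names
}

func main() {
	before := nodeNames()
	run("stop", "-p", "multinode-910416")
	run("start", "-p", "multinode-910416", "--wait=true")
	after := nodeNames()
	if !reflect.DeepEqual(before, after) {
		log.Fatalf("node set changed across restart: %v -> %v", before, after)
	}
	log.Printf("restart kept all nodes: %v", after)
}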

TestMultiNode/serial/DeleteNode (4.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-910416 node delete m03: (4.389473013s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.96s)

TestMultiNode/serial/StopMultiNode (23.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 stop
E0401 20:13:26.126472   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-910416 stop: (23.512579371s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910416 status: exit status 7 (86.512035ms)
-- stdout --
	multinode-910416
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910416-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr: exit status 7 (80.457211ms)
-- stdout --
	multinode-910416
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-910416-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0401 20:13:34.438005  179822 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:13:34.438112  179822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:13:34.438121  179822 out.go:358] Setting ErrFile to fd 2...
	I0401 20:13:34.438124  179822 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:13:34.438344  179822 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:13:34.438480  179822 out.go:352] Setting JSON to false
	I0401 20:13:34.438506  179822 mustload.go:65] Loading cluster: multinode-910416
	I0401 20:13:34.438540  179822 notify.go:220] Checking for updates...
	I0401 20:13:34.438875  179822 config.go:182] Loaded profile config "multinode-910416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:13:34.438896  179822 status.go:174] checking status of multinode-910416 ...
	I0401 20:13:34.439283  179822 cli_runner.go:164] Run: docker container inspect multinode-910416 --format={{.State.Status}}
	I0401 20:13:34.457365  179822 status.go:371] multinode-910416 host status = "Stopped" (err=<nil>)
	I0401 20:13:34.457386  179822 status.go:384] host is not running, skipping remaining checks
	I0401 20:13:34.457392  179822 status.go:176] multinode-910416 status: &{Name:multinode-910416 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0401 20:13:34.457422  179822 status.go:174] checking status of multinode-910416-m02 ...
	I0401 20:13:34.457687  179822 cli_runner.go:164] Run: docker container inspect multinode-910416-m02 --format={{.State.Status}}
	I0401 20:13:34.474919  179822 status.go:371] multinode-910416-m02 host status = "Stopped" (err=<nil>)
	I0401 20:13:34.474964  179822 status.go:384] host is not running, skipping remaining checks
	I0401 20:13:34.474974  179822 status.go:176] multinode-910416-m02 status: &{Name:multinode-910416-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.68s)

TestMultiNode/serial/RestartMultiNode (43.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910416 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910416 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (43.425478533s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-910416 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (43.98s)

TestMultiNode/serial/ValidateNameConflict (25.04s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-910416
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910416-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-910416-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.987093ms)
-- stdout --
	* [multinode-910416-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-910416-m02' is duplicated with machine name 'multinode-910416-m02' in profile 'multinode-910416'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-910416-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-910416-m03 --driver=docker  --container-runtime=crio: (22.831452513s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-910416
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-910416: exit status 80 (263.949866ms)
-- stdout --
	* Adding node m03 to cluster multinode-910416 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-910416-m03 already exists in multinode-910416-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-910416-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-910416-m03: (1.836893781s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.04s)
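
The two failures above illustrate the naming rule being validated: a new profile may not reuse an existing profile name or any machine name inside a multi-node profile (extra nodes are named <profile>-m02, <profile>-m03, ...), while "-m03" is free here because that node was deleted earlier in the run. A pure-Go illustration of the rule, not minikube's actual validation code:

// nameconflict.go: decide whether a candidate profile name collides with an
// existing profile or with one of its machine names.
package main

import "fmt"

func conflicts(candidate string, profiles map[string][]string) bool {
	for profile, machines := range profiles {
		if candidate == profile {
			return true
		}
		for _, m := range machines {
			if candidate == m {
				return true
			}
		}
	}
	return false
}

func main() {
	// state at this point in the log: the profile holds its primary node and m02
	existing := map[string][]string{
		"multinode-910416": {"multinode-910416", "multinode-910416-m02"},
	}
	for _, candidate := range []string{"multinode-910416-m02", "multinode-910416-m03"} {
		fmt.Printf("%s conflicts: %v\n", candidate, conflicts(candidate, existing))
	}
}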

TestPreload (113.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-604157 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0401 20:14:53.252393   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-604157 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m15.954140371s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-604157 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-604157 image pull gcr.io/k8s-minikube/busybox: (3.246867239s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-604157
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-604157: (5.703395416s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-604157 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0401 20:16:16.321256   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-604157 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (25.755658489s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-604157 image list
helpers_test.go:175: Cleaning up "test-preload-604157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-604157
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-604157: (2.267242332s)
--- PASS: TestPreload (113.15s)
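
The point of TestPreload is that an image pulled before the stop (gcr.io/k8s-minikube/busybox) must still be present after restarting onto a preloaded tarball. A sketch of the final assertion, assuming the binary path and profile name from this log; the substring match is a simplification of the test's comparison.

// imagecheck.go: verify a previously pulled image survives a restart by
// scanning "minikube image list" output.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-604157",
		"image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		log.Fatalf("busybox image missing after restart:\n%s", out)
	}
	log.Println("restart with preload kept the pulled image")
}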

TestScheduledStopUnix (96.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-560984 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-560984 --memory=2048 --driver=docker  --container-runtime=crio: (20.624596526s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560984 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-560984 -n scheduled-stop-560984
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560984 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0401 20:17:01.622650   23163 retry.go:31] will retry after 128.793µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.623810   23163 retry.go:31] will retry after 116.267µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.624949   23163 retry.go:31] will retry after 170.2µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.626068   23163 retry.go:31] will retry after 383.375µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.627200   23163 retry.go:31] will retry after 561.91µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.628318   23163 retry.go:31] will retry after 521.916µs: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.629449   23163 retry.go:31] will retry after 1.494881ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.631668   23163 retry.go:31] will retry after 1.35747ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.633898   23163 retry.go:31] will retry after 2.148456ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.637108   23163 retry.go:31] will retry after 2.838302ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.640350   23163 retry.go:31] will retry after 3.880324ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.644561   23163 retry.go:31] will retry after 12.88672ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.657834   23163 retry.go:31] will retry after 13.507754ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.672067   23163 retry.go:31] will retry after 27.051462ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
I0401 20:17:01.699281   23163 retry.go:31] will retry after 19.602634ms: open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560984 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560984 -n scheduled-stop-560984
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560984
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-560984 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-560984
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-560984: exit status 7 (66.725756ms)
-- stdout --
	scheduled-stop-560984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560984 -n scheduled-stop-560984
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-560984 -n scheduled-stop-560984: exit status 7 (63.73007ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-560984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-560984
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-560984: (4.639035299s)
--- PASS: TestScheduledStopUnix (96.55s)
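
The run of retry.go lines above shows the polling pattern the scheduled-stop test relies on: keep re-reading the profile's pid file with a growing, jittered delay until it appears. A self-contained sketch of that loop, with the path taken from the log and illustrative backoff constants (minikube's own retry package will differ):

// retrypid.go: poll a pid file with roughly doubling, jittered delays until
// it exists or a deadline passes.
package main

import (
	"log"
	"math/rand"
	"os"
	"time"
)

func main() {
	const pidPath = "/home/jenkins/minikube-integration/20506-16361/.minikube/profiles/scheduled-stop-560984/pid"
	deadline := time.Now().Add(5 * time.Second)
	delay := 100 * time.Microsecond // starting delay is an assumption, matching the log's scale
	for time.Now().Before(deadline) {
		if data, err := os.ReadFile(pidPath); err == nil {
			log.Printf("pid file found: %s", data)
			return
		} else {
			log.Printf("will retry after %v: %v", delay, err)
		}
		time.Sleep(delay)
		// roughly double and add jitter, as the spacing of the logged retries suggests
		delay = delay*2 + time.Duration(rand.Int63n(int64(delay)))
	}
	log.Fatal("pid file never appeared")
}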

TestInsufficientStorage (9.87s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-884482 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-884482 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.532485953s)
-- stdout --
	{"specversion":"1.0","id":"4d1f13a2-8c0e-4483-8fc3-666c2000aeb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-884482] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9aaaf4fa-2378-414d-8544-5854b39c259f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20506"}}
	{"specversion":"1.0","id":"c265aade-015c-4088-bb50-09cc4d73473e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbdbf945-5f86-4366-8a5a-6d2c49d99f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig"}}
	{"specversion":"1.0","id":"ea27462b-ec3a-4b8a-8703-d78fa982de03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube"}}
	{"specversion":"1.0","id":"d34415e1-5c98-4b0b-82ce-eff09cfc91db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0a7a8b43-8a58-4fbc-883d-2f9b2ecabe79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a3aa539c-6d2f-4036-a6e7-e9409861cf46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f0ba099f-99dd-4da7-9304-792f7e56e506","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"161b34e3-6f72-4ac1-bf0e-2da88d91c829","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"af76b3db-9968-4adf-b284-e83e6ff0bb42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b638204a-d586-4483-add3-52f2a9a27c1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-884482\" primary control-plane node in \"insufficient-storage-884482\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"34061f0d-e207-461d-8e77-decca4ee6f00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1741860993-20523 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"84d5ffa1-6129-4342-a591-efc6dd0eb2d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"34615d4e-3f5d-444f-b14c-b90a4ea19b7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-884482 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-884482 --output=json --layout=cluster: exit status 7 (256.693289ms)
-- stdout --
	{"Name":"insufficient-storage-884482","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884482","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0401 20:18:24.923935  202512 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884482" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-884482 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-884482 --output=json --layout=cluster: exit status 7 (253.137005ms)
-- stdout --
	{"Name":"insufficient-storage-884482","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-884482","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0401 20:18:25.177687  202611 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-884482" does not appear in /home/jenkins/minikube-integration/20506-16361/kubeconfig
	E0401 20:18:25.187034  202611 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/insufficient-storage-884482/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-884482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-884482
E0401 20:18:26.124820   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/addons-649141/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-884482: (1.827138021s)
--- PASS: TestInsufficientStorage (9.87s)
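
With --output=json, each progress step and the final RSRC_DOCKER_STORAGE error arrive as one CloudEvents-style JSON object per line, which is the stream this test exercises. A sketch of decoding those lines; the struct models only the fields visible in this log, so treat the schema as inferred rather than authoritative.

// events.go: read minikube's JSON event lines from stdin and print any error
// events, using only the "type" and "data" fields seen in the log above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			log.Printf("skipping malformed line: %v", err)
			continue
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exitcode %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}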

TestRunningBinaryUpgrade (79.78s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4024986335 start -p running-upgrade-708696 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4024986335 start -p running-upgrade-708696 --memory=2200 --vm-driver=docker  --container-runtime=crio: (27.399596445s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-708696 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-708696 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.483944355s)
helpers_test.go:175: Cleaning up "running-upgrade-708696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-708696
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-708696: (1.976718231s)
--- PASS: TestRunningBinaryUpgrade (79.78s)

TestKubernetesUpgrade (350.08s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.412851854s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-337773
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-337773: (5.025856632s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-337773 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-337773 status --format={{.Host}}: exit status 7 (68.924197ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.746763351s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-337773 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (95.39425ms)
-- stdout --
	* [kubernetes-upgrade-337773] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-337773
	    minikube start -p kubernetes-upgrade-337773 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3377732 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-337773 --kubernetes-version=v1.32.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-337773 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.321975178s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-337773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-337773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-337773: (4.331914586s)
--- PASS: TestKubernetesUpgrade (350.08s)
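
The exit status 106 / K8S_DOWNGRADE_UNSUPPORTED failure above is a version gate: requesting a Kubernetes version older than the cluster's current one is refused, while equal-or-newer is allowed. An illustrative check using golang.org/x/mod/semver; this is an assumption about how to express the rule, not how minikube implements it.

// downgrade.go: allow upgrades, refuse downgrades, mirroring the v1.32.2 ->
// v1.20.0 rejection shown in the log.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func check(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(check("v1.20.0", "v1.32.2")) // <nil>: upgrade is permitted
	fmt.Println(check("v1.32.2", "v1.20.0")) // refused, as with exit status 106 above
}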

TestMissingContainerUpgrade (162.92s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1508160132 start -p missing-upgrade-569773 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1508160132 start -p missing-upgrade-569773 --memory=2200 --driver=docker  --container-runtime=crio: (1m27.003637163s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-569773
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-569773: (12.278294466s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-569773
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-569773 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-569773 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (57.238825479s)
helpers_test.go:175: Cleaning up "missing-upgrade-569773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-569773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-569773: (3.88898152s)
--- PASS: TestMissingContainerUpgrade (162.92s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.32527ms)
-- stdout --
	* [NoKubernetes-578451] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (30.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578451 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578451 --driver=docker  --container-runtime=crio: (29.658496821s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-578451 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.03s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --driver=docker  --container-runtime=crio: (29.79040536s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-578451 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-578451 status -o json: exit status 2 (334.003267ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-578451","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-578451
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-578451: (2.033486575s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.16s)
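Note: the status JSON above is the assertion target: a --no-kubernetes profile keeps the host container Running while the kubelet and apiserver stay Stopped (hence the exit status 2 from the status command). A small illustrative Go sketch of decoding that payload, with struct fields taken from the JSON shown above:

package main

import (
	"encoding/json"
	"fmt"
)

// Status models only the fields visible in the `status -o json` output above.
type Status struct {
	Name      string
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	raw := `{"Name":"NoKubernetes-578451","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st Status
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// A --no-kubernetes profile should keep the host up with Kubernetes down.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}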

                                                
                                    
TestNoKubernetes/serial/Start (8.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578451 --no-kubernetes --driver=docker  --container-runtime=crio: (8.762604825s)
--- PASS: TestNoKubernetes/serial/Start (8.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-578451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-578451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (311.656528ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
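Note: with systemctl is-active --quiet, exit status 0 means the unit is active and non-zero means it is not (3 conventionally means inactive), so the "exit status 3" seen over ssh is exactly the passing outcome here. An illustrative Go sketch of that interpretation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when active and
	// non-zero otherwise; a failure here is the expected result.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running (expected for --no-kubernetes):", err)
		return
	}
	fmt.Println("kubelet is unexpectedly active")
}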

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.245841657s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.20s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-578451
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-578451: (1.234184635s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578451 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578451 --driver=docker  --container-runtime=crio: (7.436657234s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.44s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-578451 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-578451 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.134974ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.46s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (67.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4100742859 start -p stopped-upgrade-425539 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4100742859 start -p stopped-upgrade-425539 --memory=2200 --vm-driver=docker  --container-runtime=crio: (26.36173365s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4100742859 -p stopped-upgrade-425539 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4100742859 -p stopped-upgrade-425539 stop: (2.220507872s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-425539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-425539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.465610693s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.05s)
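Note: the upgrade pattern exercised above is: provision with an old released binary, stop the cluster, then restart the same profile with the binary under test. A simplified, illustrative Go sketch of that sequence; the binary paths and profile name below are placeholders, not the test's real temp-file names:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing the failed step.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	old := "/tmp/minikube-v1.26.0"    // placeholder for the released binary
	cur := "out/minikube-linux-amd64" // binary under test
	profile := "stopped-upgrade-demo" // illustrative profile name

	run(old, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	run(old, "-p", profile, "stop")
	run(cur, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}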

                                                
                                    
TestNetworkPlugins/group/false (3.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-460236 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-460236 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (146.425453ms)

                                                
                                                
-- stdout --
	* [false-460236] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20506
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0401 20:21:22.264141  246930 out.go:345] Setting OutFile to fd 1 ...
	I0401 20:21:22.264373  246930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:21:22.264381  246930 out.go:358] Setting ErrFile to fd 2...
	I0401 20:21:22.264385  246930 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0401 20:21:22.264617  246930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20506-16361/.minikube/bin
	I0401 20:21:22.265192  246930 out.go:352] Setting JSON to false
	I0401 20:21:22.266416  246930 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3828,"bootTime":1743535054,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0401 20:21:22.266514  246930 start.go:139] virtualization: kvm guest
	I0401 20:21:22.268204  246930 out.go:177] * [false-460236] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0401 20:21:22.269250  246930 out.go:177]   - MINIKUBE_LOCATION=20506
	I0401 20:21:22.269282  246930 notify.go:220] Checking for updates...
	I0401 20:21:22.271408  246930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0401 20:21:22.272738  246930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20506-16361/kubeconfig
	I0401 20:21:22.273762  246930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20506-16361/.minikube
	I0401 20:21:22.274811  246930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0401 20:21:22.275836  246930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0401 20:21:22.277384  246930 config.go:182] Loaded profile config "cert-expiration-884182": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:21:22.277510  246930 config.go:182] Loaded profile config "kubernetes-upgrade-337773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
	I0401 20:21:22.277621  246930 config.go:182] Loaded profile config "stopped-upgrade-425539": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0401 20:21:22.277726  246930 driver.go:394] Setting default libvirt URI to qemu:///system
	I0401 20:21:22.301647  246930 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0401 20:21:22.301791  246930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0401 20:21:22.356415  246930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:74 SystemTime:2025-04-01 20:21:22.346821845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0401 20:21:22.356561  246930 docker.go:318] overlay module found
	I0401 20:21:22.358193  246930 out.go:177] * Using the docker driver based on user configuration
	I0401 20:21:22.359352  246930 start.go:297] selected driver: docker
	I0401 20:21:22.359367  246930 start.go:901] validating driver "docker" against <nil>
	I0401 20:21:22.359378  246930 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0401 20:21:22.361468  246930 out.go:201] 
	W0401 20:21:22.362485  246930 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0401 20:21:22.363605  246930 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-460236 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-460236" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-884182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:21:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-337773
contexts:
- context:
    cluster: cert-expiration-884182
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-884182
  name: cert-expiration-884182
- context:
    cluster: kubernetes-upgrade-337773
    user: kubernetes-upgrade-337773
  name: kubernetes-upgrade-337773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-884182
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.key
- name: kubernetes-upgrade-337773
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-460236

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-460236"

                                                
                                                
----------------------- debugLogs end: false-460236 [took: 2.793202524s] --------------------------------
helpers_test.go:175: Cleaning up "false-460236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-460236
--- PASS: TestNetworkPlugins/group/false (3.09s)
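Note: this is another deliberate-failure test. With the crio runtime, --cni=false is rejected up front with MK_USAGE ("The \"crio\" container runtime requires CNI") before any cluster is created, which is why every debugLogs probe above reports a missing profile or context. A hypothetical Go sketch of that guard, not minikube's actual source:

package main

import (
	"fmt"
	"os"
)

// requiresCNI is a hypothetical stand-in for the validation seen above:
// the crio runtime cannot run without a CNI plugin configured.
func requiresCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("The %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := requiresCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}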

                                                
                                    
TestPause/serial/Start (42.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-631132 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-631132 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.447912127s)
--- PASS: TestPause/serial/Start (42.45s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (24.82s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-631132 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-631132 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.804888318s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.82s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-425539
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.684081896s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.68s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-631132 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-631132 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-631132 --output=json --layout=cluster: exit status 2 (301.662198ms)

                                                
                                                
-- stdout --
	{"Name":"pause-631132","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-631132","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
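Note: the cluster-layout JSON above encodes component state with HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), and the overall exit status 2 signals a cluster that is not fully running. An illustrative Go sketch decoding the fields shown:

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus models only the fields visible in the JSON above;
// status codes mirror HTTP: 200 OK, 405 Stopped, 418 Paused.
type ClusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
}

func main() {
	raw := `{"Name":"pause-631132","StatusCode":418,"StatusName":"Paused"}`
	var cs ClusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", cs.Name, cs.StatusCode, cs.StatusName) // pause-631132: 418 Paused
}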

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-631132 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-631132 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
TestPause/serial/DeletePaused (2.79s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-631132 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-631132 --alsologtostderr -v=5: (2.787247169s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (29.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (29.129110494s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-631132
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-631132: exit status 1 (20.319028ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-631132: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (29.19s)
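Note: the non-zero exit from docker volume inspect is the assertion here: after delete -p, the profile's volume must no longer exist. An illustrative Go sketch of the same cleanup check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// After `minikube delete -p pause-631132`, inspecting the profile's
	// volume should fail; a zero exit would mean leftover resources.
	out, err := exec.Command("docker", "volume", "inspect", "pause-631132").CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume cleaned up as expected")
		return
	}
	fmt.Println("volume still present:", string(out))
}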

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (42.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (42.888832494s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (42.89s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-460236 "pgrep -a kubelet"
I0401 20:23:05.389743   23163 config.go:182] Loaded profile config "auto-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-nx5rs" [2e2a7d19-746b-4f2d-a437-703089a50cbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-nx5rs" [2e2a7d19-746b-4f2d-a437-703089a50cbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003141708s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)
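Note: the NetCatPod steps follow one pattern throughout this group: force-replace the netcat deployment, then poll pods matching the app=netcat label until one is Running and Ready. An equivalent one-shot form, sketched in Go by shelling out to kubectl wait (an illustration, not the suite's actual helper):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Equivalent one-shot form of "waiting ... for pods matching app=netcat":
	// block until a pod with that label reports the Ready condition.
	cmd := exec.Command("kubectl", "--context", "auto-460236",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=netcat", "--timeout=15m")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("pod never became ready: %v\n%s", err, out)
	}
	log.Println("netcat pod is ready")
}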

                                                
                                    
TestNetworkPlugins/group/calico/Start (56.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (56.303398435s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
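Note: the HairPin check has the pod dial its own service (nc -w 5 -i 5 -z netcat 8080 from inside the netcat deployment), which only succeeds when hairpin NAT lets traffic loop back through the service IP. The zero-I/O probe that nc -z performs is, in Go, just a timed dial; service name and port as in the test, and it must run inside the cluster to resolve:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Equivalent of `nc -w 5 -z netcat 8080` run inside the pod:
	// open a TCP connection to the service and close it immediately.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin connection failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin connection succeeded")
}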

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (52.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.800400983s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.80s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xwd62" [7bf29a05-1595-40ec-8ae0-7c17dc8a03a0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004668777s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-460236 "pgrep -a kubelet"
I0401 20:23:51.757106   23163 config.go:182] Loaded profile config "kindnet-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kcmdm" [a6276424-a837-48b7-978a-96f6b90726e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-kcmdm" [a6276424-a837-48b7-978a-96f6b90726e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003645614s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6qnsr" [4213ab18-b9ae-4f84-9182-e8280f70ca3f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00427179s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-460236 "pgrep -a kubelet"
I0401 20:24:13.332835   23163 config.go:182] Loaded profile config "calico-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zz2xr" [e8a11700-5901-4ccd-97f5-2df326500f9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zz2xr" [e8a11700-5901-4ccd-97f5-2df326500f9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004700534s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (34.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (34.228112082s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (34.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-460236 "pgrep -a kubelet"
I0401 20:24:29.607164   23163 config.go:182] Loaded profile config "custom-flannel-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t5gfl" [47721121-9a61-4aa8-a32b-9194c3506ae0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t5gfl" [47721121-9a61-4aa8-a32b-9194c3506ae0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003866072s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)
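Each NetCatPod step force-replaces the probe Deployment and then polls until a matching pod reports Running/Ready, with a 15m budget. A roughly equivalent manual sequence, assuming the same testdata manifest is checked out locally:

# Redeploy the probe pod, then wait for it to become Ready.
kubectl --context custom-flannel-460236 replace --force -f testdata/netcat-deployment.yaml
kubectl --context custom-flannel-460236 wait --for=condition=ready \
  pod -l app=netcat --timeout=15m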

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (52.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0401 20:24:53.251697   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/functional-432066/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.443213342s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-460236 "pgrep -a kubelet"
I0401 20:24:56.539192   23163 config.go:182] Loaded profile config "enable-default-cni-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xnhsb" [f0cdfa3c-27cb-4fca-85da-74fedbc07832] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xnhsb" [f0cdfa3c-27cb-4fca-85da-74fedbc07832] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003765925s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (36.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-460236 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (36.241709705s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (21.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-460236 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-460236 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145788887s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
I0401 20:25:21.893893   23163 retry.go:31] will retry after 835.394493ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-460236 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-460236 exec deployment/netcat -- nslookup kubernetes.default: (5.129324502s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.11s)
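This is the one flaky-looking step in the group: the first nslookup timed out while in-cluster DNS was still settling on the freshly started profile, the harness retried after ~835ms (retry.go:31), and the second attempt resolved within about 5s, so the test still passes. A sketch of the same probe with a manual retry loop, using the commands shown above:

# Re-run the in-cluster DNS probe, retrying to ride out CoreDNS warm-up.
for i in 1 2 3; do
  kubectl --context enable-default-cni-460236 exec deployment/netcat -- \
    nslookup kubernetes.default && break
  sleep 5
done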

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-460236 "pgrep -a kubelet"
I0401 20:25:37.505007   23163 config.go:182] Loaded profile config "bridge-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-v8vx4" [9098de03-3504-42d1-a8e0-03f9234c4b14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-v8vx4" [9098de03-3504-42d1-a8e0-03f9234c4b14] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003049159s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-76579" [944f8047-b48f-44cd-bdb4-b0f0c15bfc42] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003267778s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-460236 "pgrep -a kubelet"
I0401 20:25:43.835355   23163 config.go:182] Loaded profile config "flannel-460236": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-460236 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-p5n4x" [b3edcee6-fd69-4bfd-a55f-d40f2ef24965] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-p5n4x" [b3edcee6-fd69-4bfd-a55f-d40f2ef24965] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003644862s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-460236 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-460236 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-671514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-671514 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-671514 --alsologtostderr -v=3
E0401 20:38:45.467862   23163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kindnet-460236/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-671514 --alsologtostderr -v=3: (1.207008807s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 -n no-preload-671514: exit status 7 (81.6962ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-671514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
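The EnableAddonAfterStop steps here and in the groups below all exercise the same contract: against a stopped profile, minikube status exits with code 7 (which the test flags as "may be ok"), while addons enable evidently still succeeds, with the addon taking effect once the cluster is started again. Condensed by hand, using the commands from this run:

# Status exits non-zero for a stopped host; enabling an addon still works.
out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-671514 \
  || echo "exit $? - a stopped host is expected here"
out/minikube-linux-amd64 addons enable dashboard -p no-preload-671514 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4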

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-974821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-974821 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-964633 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-964633 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-993330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-993330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-974821 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-974821 --alsologtostderr -v=3: (1.237139986s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-964633 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-964633 --alsologtostderr -v=3: (1.273933481s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (1.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-993330 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-993330 --alsologtostderr -v=3: (1.237846361s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-974821 -n embed-certs-974821: exit status 7 (107.027842ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-974821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-964633 -n old-k8s-version-964633: exit status 7 (92.219367ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-964633 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993330 -n default-k8s-diff-port-993330: exit status 7 (90.037406ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-993330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-235733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-235733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (28.098066246s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-235733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)
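The "cni mode requires additional setup" warnings are expected for the newest-cni profile: it is started with --network-plugin=cni and a pod CIDR but no CNI plugin is ever deployed, so ordinary pods cannot schedule, and the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop steps below are deliberate no-ops. To actually run workloads on such a profile one would first apply a CNI manifest, e.g. (the manifest path is a placeholder, not something this run used):

# Install a CNI of your choice before scheduling pods on this profile.
kubectl --context newest-cni-235733 apply -f <your-cni-manifest.yaml>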

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-235733 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-235733 --alsologtostderr -v=3: (1.205653069s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235733 -n newest-cni-235733
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235733 -n newest-cni-235733: exit status 7 (63.382734ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-235733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-235733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-235733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.2: (12.166570609s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-235733 -n newest-cni-235733
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.46s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-235733 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-235733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235733 -n newest-cni-235733
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235733 -n newest-cni-235733: exit status 2 (279.84268ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235733 -n newest-cni-235733
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235733 -n newest-cni-235733: exit status 2 (279.859171ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-235733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235733 -n newest-cni-235733
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-235733 -n newest-cni-235733
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)
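The Pause step verifies both directions of the transition: after pause, status reports APIServer=Paused and Kubelet=Stopped with exit code 2 (again "may be ok"); after unpause, the same status checks complete without error. Condensed from the commands above:

out/minikube-linux-amd64 pause -p newest-cni-235733
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235733 || true  # Paused, exit 2
out/minikube-linux-amd64 unpause -p newest-cni-235733
out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-235733          # succeeds once resumed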

                                                
                                    

Test skip (27/323)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-649141 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.26s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.08s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:631: 
----------------------- debugLogs start: kubenet-460236 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-460236

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-460236

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: /etc/hosts:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: /etc/resolv.conf:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-460236

>>> host: crictl pods:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: crictl containers:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> k8s: describe netcat deployment:
error: context "kubenet-460236" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-460236" does not exist

>>> k8s: netcat logs:
error: context "kubenet-460236" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-460236" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-460236" does not exist

>>> k8s: coredns logs:
error: context "kubenet-460236" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-460236" does not exist

>>> k8s: api server logs:
error: context "kubenet-460236" does not exist

>>> host: /etc/cni:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: ip a s:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: ip r s:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: iptables-save:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: iptables table nat:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-460236" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-460236" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-460236" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: kubelet daemon config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> k8s: kubelet logs:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-884182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:21:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-337773
contexts:
- context:
    cluster: cert-expiration-884182
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-884182
  name: cert-expiration-884182
- context:
    cluster: kubernetes-upgrade-337773
    user: kubernetes-upgrade-337773
  name: kubernetes-upgrade-337773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-884182
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.key
- name: kubernetes-upgrade-337773
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.key
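
This kubeconfig dump explains the repeated failures above: it defines only the cert-expiration-884182 and kubernetes-upgrade-337773 entries, and current-context is empty, so every kubectl call against the kubenet-460236 context is rejected before any API request is made. A minimal Go sketch of the same lookup, assuming client-go's clientcmd package and an illustrative kubeconfig path (none of this code is part of the suite):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical path, for illustration only; the report's kubeconfig
    	// lives under the Jenkins workspace.
    	cfg, err := clientcmd.LoadFromFile("/home/jenkins/.kube/config")
    	if err != nil {
    		fmt.Println("load error:", err)
    		return
    	}
    	// The dump above lists only cert-expiration-884182 and
    	// kubernetes-upgrade-337773, so this lookup fails the same way
    	// kubectl does: context "kubenet-460236" does not exist.
    	if _, ok := cfg.Contexts["kubenet-460236"]; !ok {
    		fmt.Println(`context "kubenet-460236" does not exist`)
    	}
    }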

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-460236

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-460236"

                                                
                                                
----------------------- debugLogs end: kubenet-460236 [took: 2.929295764s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-460236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-460236
--- SKIP: TestNetworkPlugins/group/kubenet (3.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-460236 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-460236
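
The dig and nc probes above all target 10.96.0.10, the conventional kube-dns ClusterIP in minikube's default 10.96.0.0/12 service range, over both udp/53 and tcp/53. A rough Go equivalent of that dual-transport probe, as an illustration only (not code from the suite):

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Resolve the in-cluster API service name against the kube-dns
    	// ClusterIP, once over UDP and once over TCP, like dig/nc above.
    	for _, network := range []string{"udp", "tcp"} {
    		r := &net.Resolver{
    			PreferGo: true,
    			Dial: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				d := net.Dialer{Timeout: 2 * time.Second}
    				return d.DialContext(ctx, network, "10.96.0.10:53")
    			},
    		}
    		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
    		fmt.Println(network, addrs, err)
    	}
    }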

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-460236" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-884182
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20506-16361/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:21:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-337773
contexts:
- context:
    cluster: cert-expiration-884182
    extensions:
    - extension:
        last-update: Tue, 01 Apr 2025 20:19:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-884182
  name: cert-expiration-884182
- context:
    cluster: kubernetes-upgrade-337773
    user: kubernetes-upgrade-337773
  name: kubernetes-upgrade-337773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-884182
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/cert-expiration-884182/client.key
- name: kubernetes-upgrade-337773
  user:
    client-certificate: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.crt
    client-key: /home/jenkins/minikube-integration/20506-16361/.minikube/profiles/kubernetes-upgrade-337773/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-460236

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-460236" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-460236"

                                                
                                                
----------------------- debugLogs end: cilium-460236 [took: 3.335437238s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-460236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-460236
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-564557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-564557
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
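
The skip above comes from a driver gate at start_stop_delete_test.go:101: on non-virtualbox runs the test bails out immediately and only the profile cleanup executes. A minimal sketch of such a gate, using a hypothetical helper name rather than minikube's actual test plumbing:

    package sketch

    import "testing"

    // validateDriverGate is a hypothetical stand-in for the guard in
    // start_stop_delete_test.go: skip unless the requested driver matches.
    func validateDriverGate(t *testing.T, driver string) {
    	if driver != "virtualbox" {
    		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
    	}
    	// ... test body would run here on virtualbox ...
    }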

                                                
                                    